00:00:00.001 Started by upstream project "autotest-per-patch" build number 132575
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.010 The recommended git tool is: git
00:00:00.010 using credential 00000000-0000-0000-0000-000000000002
00:00:00.012 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.026 Fetching changes from the remote Git repository
00:00:00.030 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.042 Using shallow fetch with depth 1
00:00:00.042 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.042 > git --version # timeout=10
00:00:00.054 > git --version # 'git version 2.39.2'
00:00:00.054 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.073 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.073 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.261 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.273 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.284 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.284 > git config core.sparsecheckout # timeout=10
00:00:02.295 > git read-tree -mu HEAD # timeout=10
00:00:02.309 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.335 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.335 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.577 [Pipeline] Start of Pipeline
00:00:02.593 [Pipeline] library
00:00:02.595 Loading library shm_lib@master
00:00:02.595 Library shm_lib@master is due for a refresh after 30 minutes, clearing.
00:00:02.596 Caching library shm_lib@master
00:00:02.596 Attempting to resolve master from remote references...
00:00:02.596 > git --version # timeout=10
00:00:02.610 > git --version # 'git version 2.39.2'
00:00:02.610 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.621 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.621 > git ls-remote -- https://review.spdk.io/gerrit/a/build_pool/shm_lib # timeout=10
00:00:07.308 Found match: refs/heads/master revision fa9b922cb39dd3ae66a527ed92492856a7cced22
00:00:07.312 Selected Git installation does not exist. Using Default
00:00:07.312 The recommended git tool is: NONE
00:00:07.312 using credential 00000000-0000-0000-0000-000000000002
00:00:07.314 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_libs/6e27998ca6b735f457f1bf0490b425345ba4637a91de7f2498f417cb3d899827/.git # timeout=10
00:00:07.325 Fetching changes from the remote Git repository
00:00:07.328 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/shm_lib # timeout=10
00:00:07.337 Fetching without tags
00:00:07.337 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/shm_lib
00:00:07.337 > git --version # timeout=10
00:00:07.347 > git --version # 'git version 2.39.2'
00:00:07.347 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:07.358 Setting http proxy: proxy-dmz.intel.com:911
00:00:07.358 > git fetch --no-tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/shm_lib +refs/heads/*:refs/remotes/origin/* # timeout=10
00:00:11.095 Checking out Revision fa9b922cb39dd3ae66a527ed92492856a7cced22 (master)
00:00:11.095 > git config core.sparsecheckout # timeout=10
00:00:11.108 > git checkout -f fa9b922cb39dd3ae66a527ed92492856a7cced22 # timeout=10
00:00:11.129 Commit message: "vars/lib: Allow to pass log level while sending build details"
00:00:11.129 > git rev-list --no-walk fa9b922cb39dd3ae66a527ed92492856a7cced22 # timeout=10
00:00:11.227 [Pipeline] node
00:00:11.429 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest
00:00:11.435 [Pipeline] {
00:00:11.458 [Pipeline] catchError
00:00:11.464 [Pipeline] {
00:00:11.489 [Pipeline] wrap
00:00:11.501 [Pipeline] {
00:00:11.515 [Pipeline] stage
00:00:11.519 [Pipeline] { (Prologue)
00:00:11.547 [Pipeline] echo
00:00:11.550 Node: VM-host-SM17
00:00:11.559 [Pipeline] cleanWs
00:00:11.569 [WS-CLEANUP] Deleting project workspace...
00:00:11.569 [WS-CLEANUP] Deferred wipeout is used...
00:00:11.574 [WS-CLEANUP] done
00:00:11.888 [Pipeline] setCustomBuildProperty
00:00:11.986 [Pipeline] httpRequest
00:00:14.427 [Pipeline] echo
00:00:14.429 Sorcerer 10.211.164.20 is alive
00:00:14.440 [Pipeline] retry
00:00:14.442 [Pipeline] {
00:00:14.459 [Pipeline] httpRequest
00:00:14.472 HttpMethod: GET
00:00:14.473 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.473 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.491 Response Code: HTTP/1.1 200 OK
00:00:14.492 Success: Status code 200 is in the accepted range: 200,404
00:00:14.492 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:21.354 [Pipeline] }
00:00:21.367 [Pipeline] // retry
00:00:21.374 [Pipeline] sh
00:00:21.653 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:21.665 [Pipeline] httpRequest
00:00:22.209 [Pipeline] echo
00:00:22.210 Sorcerer 10.211.164.20 is alive
00:00:22.218 [Pipeline] retry
00:00:22.219 [Pipeline] {
00:00:22.231 [Pipeline] httpRequest
00:00:22.235 HttpMethod: GET
00:00:22.236 URL: http://10.211.164.20/packages/spdk_df5e5465ce5f099923eeb3a57660df5af766360a.tar.gz
00:00:22.236 Sending request to url: http://10.211.164.20/packages/spdk_df5e5465ce5f099923eeb3a57660df5af766360a.tar.gz
00:00:22.246 Response Code: HTTP/1.1 200 OK
00:00:22.247 Success: Status code 200 is in the accepted range: 200,404
00:00:22.247 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_df5e5465ce5f099923eeb3a57660df5af766360a.tar.gz
00:02:53.750 [Pipeline] }
00:02:53.771 [Pipeline] // retry
00:02:53.781 [Pipeline] sh
00:02:54.064 + tar --no-same-owner -xf spdk_df5e5465ce5f099923eeb3a57660df5af766360a.tar.gz
00:02:56.614 [Pipeline] sh
00:02:56.895 + git -C spdk log --oneline -n5
00:02:56.895 df5e5465c test/common: [TEST] Keep list of tests of given suite in *.tests
00:02:56.895 f56ce2b18 test/common: [TEST] Simplify all_tests search
00:02:56.895 c25d82eb4 test/common: [TEST] Add __test_mapper stub
00:02:56.895 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen
00:02:56.895 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE
00:02:56.915 [Pipeline] writeFile
00:02:56.930 [Pipeline] sh
00:02:57.211 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:57.222 [Pipeline] sh
00:02:57.503 + cat autorun-spdk.conf
00:02:57.503 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.503 SPDK_RUN_ASAN=1
00:02:57.503 SPDK_RUN_UBSAN=1
00:02:57.503 SPDK_TEST_RAID=1
00:02:57.503 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.510 RUN_NIGHTLY=0
00:02:57.513 [Pipeline] }
00:02:57.529 [Pipeline] // stage
00:02:57.545 [Pipeline] stage
00:02:57.547 [Pipeline] { (Run VM)
00:02:57.560 [Pipeline] sh
00:02:57.841 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:57.841 + echo 'Start stage prepare_nvme.sh'
00:02:57.841 Start stage prepare_nvme.sh
00:02:57.841 + [[ -n 7 ]]
00:02:57.841 + disk_prefix=ex7
00:02:57.841 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:02:57.841 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:02:57.841 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:02:57.841 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.841 ++ SPDK_RUN_ASAN=1
00:02:57.841 ++ SPDK_RUN_UBSAN=1
00:02:57.841 ++ SPDK_TEST_RAID=1
00:02:57.841 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.841 ++ RUN_NIGHTLY=0
00:02:57.841 + cd /var/jenkins/workspace/raid-vg-autotest
00:02:57.841 + nvme_files=()
00:02:57.841 + declare -A nvme_files
00:02:57.841 + backend_dir=/var/lib/libvirt/images/backends
00:02:57.841 + nvme_files['nvme.img']=5G
00:02:57.841 + nvme_files['nvme-cmb.img']=5G
00:02:57.841 + nvme_files['nvme-multi0.img']=4G
00:02:57.841 + nvme_files['nvme-multi1.img']=4G
00:02:57.841 + nvme_files['nvme-multi2.img']=4G
00:02:57.841 + nvme_files['nvme-openstack.img']=8G
00:02:57.841 + nvme_files['nvme-zns.img']=5G
00:02:57.841 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:57.841 + (( SPDK_TEST_FTL == 1 ))
00:02:57.841 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:57.841 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:57.841 + for nvme in "${!nvme_files[@]}"
00:02:57.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:02:57.841 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:57.841 + for nvme in "${!nvme_files[@]}"
00:02:57.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:02:57.841 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:57.841 + for nvme in "${!nvme_files[@]}"
00:02:57.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:02:57.841 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:57.841 + for nvme in "${!nvme_files[@]}"
00:02:57.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:02:57.841 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:57.841 + for nvme in "${!nvme_files[@]}"
00:02:57.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:02:57.841 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:57.841 + for nvme in "${!nvme_files[@]}"
00:02:57.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:02:57.841 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:57.841 + for nvme in "${!nvme_files[@]}"
00:02:57.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:02:58.800 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:58.800 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:02:58.800 + echo 'End stage prepare_nvme.sh'
00:02:58.800 End stage prepare_nvme.sh
00:02:58.811 [Pipeline] sh
00:02:59.091 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:59.091 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:02:59.091 
00:02:59.091 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:02:59.091 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:02:59.091 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:02:59.091 HELP=0
00:02:59.091 DRY_RUN=0
00:02:59.091 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:02:59.091 NVME_DISKS_TYPE=nvme,nvme,
00:02:59.091 NVME_AUTO_CREATE=0
00:02:59.091 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:02:59.091 NVME_CMB=,,
00:02:59.091 NVME_PMR=,,
00:02:59.091 NVME_ZNS=,,
00:02:59.091 NVME_MS=,,
00:02:59.091 NVME_FDP=,,
00:02:59.091 SPDK_VAGRANT_DISTRO=fedora39
00:02:59.091 SPDK_VAGRANT_VMCPU=10
00:02:59.091 SPDK_VAGRANT_VMRAM=12288
00:02:59.091 SPDK_VAGRANT_PROVIDER=libvirt
00:02:59.091 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:59.091 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:59.091 SPDK_OPENSTACK_NETWORK=0
00:02:59.091 VAGRANT_PACKAGE_BOX=0
00:02:59.091 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:59.091 FORCE_DISTRO=true
00:02:59.091 VAGRANT_BOX_VERSION=
00:02:59.091 EXTRA_VAGRANTFILES=
00:02:59.091 NIC_MODEL=e1000
00:02:59.091 
00:02:59.091 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:02:59.091 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:03:01.628 Bringing machine 'default' up with 'libvirt' provider...
00:03:02.196 ==> default: Creating image (snapshot of base box volume).
00:03:02.455 ==> default: Creating domain with the following settings...
00:03:02.455 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1732696498_4aa0839de66a6943fd3c
00:03:02.455 ==> default:  -- Domain type: kvm
00:03:02.455 ==> default:  -- Cpus: 10
00:03:02.455 ==> default:  -- Feature: acpi
00:03:02.455 ==> default:  -- Feature: apic
00:03:02.455 ==> default:  -- Feature: pae
00:03:02.455 ==> default:  -- Memory: 12288M
00:03:02.455 ==> default:  -- Memory Backing: hugepages: 
00:03:02.455 ==> default:  -- Management MAC: 
00:03:02.455 ==> default:  -- Loader: 
00:03:02.455 ==> default:  -- Nvram: 
00:03:02.455 ==> default:  -- Base box: spdk/fedora39
00:03:02.455 ==> default:  -- Storage pool: default
00:03:02.455 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732696498_4aa0839de66a6943fd3c.img (20G)
00:03:02.455 ==> default:  -- Volume Cache: default
00:03:02.455 ==> default:  -- Kernel: 
00:03:02.455 ==> default:  -- Initrd: 
00:03:02.455 ==> default:  -- Graphics Type: vnc
00:03:02.455 ==> default:  -- Graphics Port: -1
00:03:02.455 ==> default:  -- Graphics IP: 127.0.0.1
00:03:02.455 ==> default:  -- Graphics Password: Not defined
00:03:02.455 ==> default:  -- Video Type: cirrus
00:03:02.455 ==> default:  -- Video VRAM: 9216
00:03:02.455 ==> default:  -- Sound Type:
00:03:02.455 ==> default:  -- Keymap: en-us
00:03:02.455 ==> default:  -- TPM Path: 
00:03:02.455 ==> default:  -- INPUT: type=mouse, bus=ps2
00:03:02.455 ==> default:  -- Command line args: 
00:03:02.455 ==> default:  -> value=-device, 
00:03:02.455 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:03:02.455 ==> default:  -> value=-drive, 
00:03:02.455 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 
00:03:02.455 ==> default:  -> value=-device, 
00:03:02.455 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:03:02.455 ==> default:  -> value=-device, 
00:03:02.455 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:03:02.456 ==> default:  -> value=-drive, 
00:03:02.456 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 
00:03:02.456 ==> default:  -> value=-device, 
00:03:02.456 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:03:02.456 ==> default:  -> value=-drive, 
00:03:02.456 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 
00:03:02.456 ==> default:  -> value=-device, 
00:03:02.456 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:03:02.456 ==> default:  -> value=-drive, 
00:03:02.456 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 
00:03:02.456 ==> default:  -> value=-device, 
00:03:02.456 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:03:02.456 ==> default: Creating shared folders metadata...
00:03:02.456 ==> default: Starting domain.
00:03:03.833 ==> default: Waiting for domain to get an IP address...
00:03:21.923 ==> default: Waiting for SSH to become available...
00:03:21.923 ==> default: Configuring and enabling network interfaces...
00:03:24.458     default: SSH address: 192.168.121.205:22
00:03:24.458     default: SSH username: vagrant
00:03:24.458     default: SSH auth method: private key
00:03:26.994 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:35.114 ==> default: Mounting SSHFS shared folder...
00:03:36.049 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:36.049 ==> default: Checking Mount..
00:03:36.986 ==> default: Folder Successfully Mounted!
00:03:36.986 ==> default: Running provisioner: file...
00:03:37.921     default: ~/.gitconfig => .gitconfig
00:03:38.488 
00:03:38.488 SUCCESS!
00:03:38.488 
00:03:38.488 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:38.488 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:38.488 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:03:38.488 
00:03:38.497 [Pipeline] }
00:03:38.514 [Pipeline] // stage
00:03:38.535 [Pipeline] dir
00:03:38.536 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:03:38.538 [Pipeline] {
00:03:38.553 [Pipeline] catchError
00:03:38.555 [Pipeline] {
00:03:38.569 [Pipeline] sh
00:03:38.850 + vagrant ssh-config --host vagrant
00:03:38.850 + sed -ne /^Host/,$p
00:03:38.850 + tee ssh_conf
00:03:43.064 Host vagrant
00:03:43.064   HostName 192.168.121.205
00:03:43.064   User vagrant
00:03:43.064   Port 22
00:03:43.064   UserKnownHostsFile /dev/null
00:03:43.064   StrictHostKeyChecking no
00:03:43.064   PasswordAuthentication no
00:03:43.064   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:43.064   IdentitiesOnly yes
00:03:43.064   LogLevel FATAL
00:03:43.064   ForwardAgent yes
00:03:43.064   ForwardX11 yes
00:03:43.064 
00:03:43.076 [Pipeline] withEnv
00:03:43.078 [Pipeline] {
00:03:43.088 [Pipeline] sh
00:03:43.369 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:43.369 source /etc/os-release
00:03:43.369 [[ -e /image.version ]] && img=$(< /image.version)
00:03:43.369 # Minimal, systemd-like check.
00:03:43.369 if [[ -e /.dockerenv ]]; then
00:03:43.369 # Clear garbage from the node's name:
00:03:43.369 # agt-er_autotest_547-896 -> autotest_547-896
00:03:43.369 # $HOSTNAME is the actual container id
00:03:43.369 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:43.369 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:43.369 # We can assume this is a mount from a host where container is running,
00:03:43.369 # so fetch its hostname to easily identify the target swarm worker.
00:03:43.369 container="$(< /etc/hostname) ($agent)"
00:03:43.369 else
00:03:43.369 # Fallback
00:03:43.369 container=$agent
00:03:43.369 fi
00:03:43.369 fi
00:03:43.369 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:43.369 
00:03:43.640 [Pipeline] }
00:03:43.657 [Pipeline] // withEnv
00:03:43.667 [Pipeline] setCustomBuildProperty
00:03:43.685 [Pipeline] stage
00:03:43.688 [Pipeline] { (Tests)
00:03:43.707 [Pipeline] sh
00:03:43.984 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:44.255 [Pipeline] sh
00:03:44.535 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:44.811 [Pipeline] timeout
00:03:44.811 Timeout set to expire in 1 hr 30 min
00:03:44.813 [Pipeline] {
00:03:44.831 [Pipeline] sh
00:03:45.108 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:45.675 HEAD is now at df5e5465c test/common: [TEST] Keep list of tests of given suite in *.tests
00:03:45.687 [Pipeline] sh
00:03:46.145 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:46.416 [Pipeline] sh
00:03:46.690 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:46.963 [Pipeline] sh
00:03:47.241 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:03:47.500 ++ readlink -f spdk_repo
00:03:47.500 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:47.500 + [[ -n /home/vagrant/spdk_repo ]]
00:03:47.500 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:47.500 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:47.500 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:47.500 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:47.500 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:47.500 + [[ raid-vg-autotest == pkgdep-* ]]
00:03:47.500 + cd /home/vagrant/spdk_repo
00:03:47.500 + source /etc/os-release
00:03:47.500 ++ NAME='Fedora Linux'
00:03:47.500 ++ VERSION='39 (Cloud Edition)'
00:03:47.500 ++ ID=fedora
00:03:47.500 ++ VERSION_ID=39
00:03:47.500 ++ VERSION_CODENAME=
00:03:47.500 ++ PLATFORM_ID=platform:f39
00:03:47.500 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:47.500 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:47.500 ++ LOGO=fedora-logo-icon
00:03:47.500 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:47.500 ++ HOME_URL=https://fedoraproject.org/
00:03:47.500 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:47.500 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:47.500 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:47.500 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:47.500 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:47.500 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:47.500 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:47.500 ++ SUPPORT_END=2024-11-12
00:03:47.500 ++ VARIANT='Cloud Edition'
00:03:47.500 ++ VARIANT_ID=cloud
00:03:47.500 + uname -a
00:03:47.500 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:47.500 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:47.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:47.758 Hugepages
00:03:47.758 node hugesize free / total
00:03:48.016 node0 1048576kB 0 / 0
00:03:48.016 node0 2048kB 0 / 0
00:03:48.016 
00:03:48.016 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:48.016 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:48.016 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:48.016 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:03:48.016 + rm -f /tmp/spdk-ld-path
00:03:48.016 + source autorun-spdk.conf
00:03:48.016 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:48.016 ++ SPDK_RUN_ASAN=1
00:03:48.016 ++ SPDK_RUN_UBSAN=1
00:03:48.016 ++ SPDK_TEST_RAID=1
00:03:48.016 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:48.016 ++ RUN_NIGHTLY=0
00:03:48.016 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:48.016 + [[ -n '' ]]
00:03:48.016 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:48.016 + for M in /var/spdk/build-*-manifest.txt
00:03:48.016 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:48.016 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:48.016 + for M in /var/spdk/build-*-manifest.txt
00:03:48.016 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:48.016 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:48.016 + for M in /var/spdk/build-*-manifest.txt
00:03:48.016 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:48.016 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:48.016 ++ uname
00:03:48.016 + [[ Linux == \L\i\n\u\x ]]
00:03:48.016 + sudo dmesg -T
00:03:48.016 + sudo dmesg --clear
00:03:48.017 + dmesg_pid=5204
00:03:48.017 + [[ Fedora Linux == FreeBSD ]]
00:03:48.017 + sudo dmesg -Tw
00:03:48.017 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:48.017 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:48.017 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:48.017 + [[ -x /usr/src/fio-static/fio ]]
00:03:48.017 + export FIO_BIN=/usr/src/fio-static/fio
00:03:48.017 + FIO_BIN=/usr/src/fio-static/fio
00:03:48.017 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:48.017 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:48.017 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:48.017 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:48.017 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:48.017 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:48.017 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:48.017 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:48.017 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:48.277 08:35:44 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:03:48.277 08:35:44 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:48.277 08:35:44 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:48.277 08:35:44 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:03:48.277 08:35:44 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:03:48.277 08:35:44 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:03:48.277 08:35:44 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:48.277 08:35:44 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:03:48.277 08:35:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:48.277 08:35:44 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:48.277 08:35:44 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:03:48.277 08:35:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:48.277 08:35:44 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:48.277 08:35:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:48.277 08:35:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:48.277 08:35:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:48.277 08:35:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:48.277 08:35:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:48.277 08:35:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:48.277 08:35:44 -- paths/export.sh@5 -- $ export PATH
00:03:48.277 08:35:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:48.277 08:35:44 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:48.277 08:35:44 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:48.277 08:35:44 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732696544.XXXXXX
00:03:48.277 08:35:44 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732696544.QhBRMK
00:03:48.277 08:35:44 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:48.277 08:35:44 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:48.277 08:35:44 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:48.277 08:35:44 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:48.277 08:35:44 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:48.277 08:35:44 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:48.277 08:35:44 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:48.277 08:35:44 -- common/autotest_common.sh@10 -- $ set +x
00:03:48.277 08:35:44 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:03:48.277 08:35:44 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:48.277 08:35:44 -- pm/common@17 -- $ local monitor
00:03:48.277 08:35:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:48.277 08:35:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:48.277 08:35:44 -- pm/common@25 -- $ sleep 1
00:03:48.277 08:35:44 -- pm/common@21 -- $ date +%s
00:03:48.277 08:35:44 -- pm/common@21 -- $ date +%s
00:03:48.277 08:35:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732696544
00:03:48.277 08:35:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732696544
00:03:48.277 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732696544_collect-cpu-load.pm.log
00:03:48.277 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732696544_collect-vmstat.pm.log
00:03:49.216 08:35:45 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:49.216 08:35:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:49.216 08:35:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:49.216 08:35:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:49.216 08:35:45 -- spdk/autobuild.sh@16 -- $ date -u
00:03:49.216 Wed Nov 27 08:35:45 AM UTC 2024
00:03:49.216 08:35:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:49.216 v25.01-pre-239-gdf5e5465c
00:03:49.216 08:35:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:49.216 08:35:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:49.216 08:35:45 -- common/autotest_common.sh@1102 -- $ '[' 3 -le 1 ']'
00:03:49.216 08:35:45 -- common/autotest_common.sh@1108 -- $ xtrace_disable
00:03:49.216 08:35:45 -- common/autotest_common.sh@10 -- $ set +x
00:03:49.216 ************************************
00:03:49.216 START TEST asan
00:03:49.216 ************************************
00:03:49.216 using asan
00:03:49.216 08:35:45 asan -- common/autotest_common.sh@1126 -- $ echo 'using asan'
00:03:49.216 
00:03:49.216 real 0m0.000s
00:03:49.216 user 0m0.000s
00:03:49.216 sys 0m0.000s
00:03:49.216 08:35:45 asan -- common/autotest_common.sh@1127 -- $ xtrace_disable
00:03:49.216 ************************************
00:03:49.216 END TEST asan
00:03:49.216 08:35:45 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:49.216 ************************************
00:03:49.216 08:35:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:49.216 08:35:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:49.216 08:35:45 -- common/autotest_common.sh@1102 -- $ '[' 3 -le 1 ']'
00:03:49.216 08:35:45 -- common/autotest_common.sh@1108 -- $ xtrace_disable
00:03:49.216 08:35:45 -- common/autotest_common.sh@10 -- $ set +x
00:03:49.476 ************************************
00:03:49.476 START TEST ubsan
00:03:49.476 ************************************
00:03:49.476 using ubsan
00:03:49.476 08:35:45 ubsan -- common/autotest_common.sh@1126 -- $ echo 'using ubsan'
00:03:49.476 
00:03:49.476 real 0m0.000s
00:03:49.476 user 0m0.000s
00:03:49.476 sys 0m0.000s
00:03:49.476 08:35:45 ubsan -- common/autotest_common.sh@1127 -- $ xtrace_disable
00:03:49.476 08:35:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:49.476 ************************************
00:03:49.476 END TEST ubsan
00:03:49.476 ************************************
00:03:49.476 08:35:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:49.476 08:35:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:49.476 08:35:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:49.476 08:35:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:49.476 08:35:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:49.476 08:35:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:49.476 08:35:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:49.476 08:35:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:49.476 08:35:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:03:49.476 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:49.476 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:50.044 Using 'verbs' RDMA provider
00:04:05.890 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:18.131 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:18.131 Creating mk/config.mk...done.
00:04:18.131 Creating mk/cc.flags.mk...done.
00:04:18.131 Type 'make' to build.
00:04:18.131 08:36:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:18.131 08:36:14 -- common/autotest_common.sh@1102 -- $ '[' 3 -le 1 ']'
00:04:18.131 08:36:14 -- common/autotest_common.sh@1108 -- $ xtrace_disable
00:04:18.131 08:36:14 -- common/autotest_common.sh@10 -- $ set +x
00:04:18.131 ************************************
00:04:18.131 START TEST make
00:04:18.131 ************************************
00:04:18.131 08:36:14 make -- common/autotest_common.sh@1126 -- $ make -j10
00:04:18.131 make[1]: Nothing to be done for 'all'.
00:04:30.342 The Meson build system 00:04:30.342 Version: 1.5.0 00:04:30.342 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:30.342 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:30.342 Build type: native build 00:04:30.342 Program cat found: YES (/usr/bin/cat) 00:04:30.342 Project name: DPDK 00:04:30.342 Project version: 24.03.0 00:04:30.342 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:30.342 C linker for the host machine: cc ld.bfd 2.40-14 00:04:30.342 Host machine cpu family: x86_64 00:04:30.342 Host machine cpu: x86_64 00:04:30.342 Message: ## Building in Developer Mode ## 00:04:30.342 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:30.342 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:30.342 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:30.342 Program python3 found: YES (/usr/bin/python3) 00:04:30.342 Program cat found: YES (/usr/bin/cat) 00:04:30.342 Compiler for C supports arguments -march=native: YES 00:04:30.342 Checking for size of "void *" : 8 00:04:30.342 Checking for size of "void *" : 8 (cached) 00:04:30.342 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:30.342 Library m found: YES 00:04:30.342 Library numa found: YES 00:04:30.342 Has header "numaif.h" : YES 00:04:30.342 Library fdt found: NO 00:04:30.342 Library execinfo found: NO 00:04:30.342 Has header "execinfo.h" : YES 00:04:30.342 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:30.342 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:30.342 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:30.342 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:30.342 Run-time dependency openssl found: YES 3.1.1 00:04:30.342 Run-time dependency libpcap found: YES 1.10.4 00:04:30.342 Has header "pcap.h" with dependency 
libpcap: YES 00:04:30.342 Compiler for C supports arguments -Wcast-qual: YES 00:04:30.342 Compiler for C supports arguments -Wdeprecated: YES 00:04:30.342 Compiler for C supports arguments -Wformat: YES 00:04:30.342 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:30.342 Compiler for C supports arguments -Wformat-security: NO 00:04:30.342 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:30.342 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:30.342 Compiler for C supports arguments -Wnested-externs: YES 00:04:30.342 Compiler for C supports arguments -Wold-style-definition: YES 00:04:30.342 Compiler for C supports arguments -Wpointer-arith: YES 00:04:30.342 Compiler for C supports arguments -Wsign-compare: YES 00:04:30.342 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:30.342 Compiler for C supports arguments -Wundef: YES 00:04:30.342 Compiler for C supports arguments -Wwrite-strings: YES 00:04:30.342 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:30.342 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:30.342 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:30.342 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:30.342 Program objdump found: YES (/usr/bin/objdump) 00:04:30.342 Compiler for C supports arguments -mavx512f: YES 00:04:30.342 Checking if "AVX512 checking" compiles: YES 00:04:30.342 Fetching value of define "__SSE4_2__" : 1 00:04:30.342 Fetching value of define "__AES__" : 1 00:04:30.342 Fetching value of define "__AVX__" : 1 00:04:30.342 Fetching value of define "__AVX2__" : 1 00:04:30.342 Fetching value of define "__AVX512BW__" : (undefined) 00:04:30.342 Fetching value of define "__AVX512CD__" : (undefined) 00:04:30.342 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:30.342 Fetching value of define "__AVX512F__" : (undefined) 00:04:30.342 Fetching value of define "__AVX512VL__" : 
(undefined) 00:04:30.342 Fetching value of define "__PCLMUL__" : 1 00:04:30.342 Fetching value of define "__RDRND__" : 1 00:04:30.342 Fetching value of define "__RDSEED__" : 1 00:04:30.342 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:30.342 Fetching value of define "__znver1__" : (undefined) 00:04:30.342 Fetching value of define "__znver2__" : (undefined) 00:04:30.342 Fetching value of define "__znver3__" : (undefined) 00:04:30.342 Fetching value of define "__znver4__" : (undefined) 00:04:30.342 Library asan found: YES 00:04:30.342 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:30.342 Message: lib/log: Defining dependency "log" 00:04:30.342 Message: lib/kvargs: Defining dependency "kvargs" 00:04:30.342 Message: lib/telemetry: Defining dependency "telemetry" 00:04:30.342 Library rt found: YES 00:04:30.342 Checking for function "getentropy" : NO 00:04:30.342 Message: lib/eal: Defining dependency "eal" 00:04:30.342 Message: lib/ring: Defining dependency "ring" 00:04:30.342 Message: lib/rcu: Defining dependency "rcu" 00:04:30.342 Message: lib/mempool: Defining dependency "mempool" 00:04:30.342 Message: lib/mbuf: Defining dependency "mbuf" 00:04:30.342 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:30.342 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:30.342 Compiler for C supports arguments -mpclmul: YES 00:04:30.342 Compiler for C supports arguments -maes: YES 00:04:30.342 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:30.342 Compiler for C supports arguments -mavx512bw: YES 00:04:30.342 Compiler for C supports arguments -mavx512dq: YES 00:04:30.342 Compiler for C supports arguments -mavx512vl: YES 00:04:30.342 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:30.342 Compiler for C supports arguments -mavx2: YES 00:04:30.342 Compiler for C supports arguments -mavx: YES 00:04:30.342 Message: lib/net: Defining dependency "net" 00:04:30.342 Message: lib/meter: Defining 
dependency "meter" 00:04:30.342 Message: lib/ethdev: Defining dependency "ethdev" 00:04:30.342 Message: lib/pci: Defining dependency "pci" 00:04:30.342 Message: lib/cmdline: Defining dependency "cmdline" 00:04:30.342 Message: lib/hash: Defining dependency "hash" 00:04:30.342 Message: lib/timer: Defining dependency "timer" 00:04:30.342 Message: lib/compressdev: Defining dependency "compressdev" 00:04:30.342 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:30.342 Message: lib/dmadev: Defining dependency "dmadev" 00:04:30.342 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:30.342 Message: lib/power: Defining dependency "power" 00:04:30.342 Message: lib/reorder: Defining dependency "reorder" 00:04:30.342 Message: lib/security: Defining dependency "security" 00:04:30.342 Has header "linux/userfaultfd.h" : YES 00:04:30.342 Has header "linux/vduse.h" : YES 00:04:30.342 Message: lib/vhost: Defining dependency "vhost" 00:04:30.342 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:30.342 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:30.342 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:30.342 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:30.342 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:30.342 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:30.342 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:30.342 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:30.342 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:30.342 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:30.342 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:30.342 Configuring doxy-api-html.conf using configuration 00:04:30.342 Configuring doxy-api-man.conf using configuration 00:04:30.342 Program mandb found: YES 
(/usr/bin/mandb) 00:04:30.342 Program sphinx-build found: NO 00:04:30.342 Configuring rte_build_config.h using configuration 00:04:30.342 Message: 00:04:30.342 ================= 00:04:30.342 Applications Enabled 00:04:30.342 ================= 00:04:30.342 00:04:30.342 apps: 00:04:30.342 00:04:30.342 00:04:30.342 Message: 00:04:30.342 ================= 00:04:30.342 Libraries Enabled 00:04:30.342 ================= 00:04:30.342 00:04:30.342 libs: 00:04:30.342 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:30.342 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:30.342 cryptodev, dmadev, power, reorder, security, vhost, 00:04:30.342 00:04:30.342 Message: 00:04:30.342 =============== 00:04:30.342 Drivers Enabled 00:04:30.342 =============== 00:04:30.342 00:04:30.342 common: 00:04:30.342 00:04:30.342 bus: 00:04:30.342 pci, vdev, 00:04:30.342 mempool: 00:04:30.342 ring, 00:04:30.342 dma: 00:04:30.342 00:04:30.342 net: 00:04:30.342 00:04:30.342 crypto: 00:04:30.342 00:04:30.342 compress: 00:04:30.342 00:04:30.342 vdpa: 00:04:30.342 00:04:30.343 00:04:30.343 Message: 00:04:30.343 ================= 00:04:30.343 Content Skipped 00:04:30.343 ================= 00:04:30.343 00:04:30.343 apps: 00:04:30.343 dumpcap: explicitly disabled via build config 00:04:30.343 graph: explicitly disabled via build config 00:04:30.343 pdump: explicitly disabled via build config 00:04:30.343 proc-info: explicitly disabled via build config 00:04:30.343 test-acl: explicitly disabled via build config 00:04:30.343 test-bbdev: explicitly disabled via build config 00:04:30.343 test-cmdline: explicitly disabled via build config 00:04:30.343 test-compress-perf: explicitly disabled via build config 00:04:30.343 test-crypto-perf: explicitly disabled via build config 00:04:30.343 test-dma-perf: explicitly disabled via build config 00:04:30.343 test-eventdev: explicitly disabled via build config 00:04:30.343 test-fib: explicitly disabled via build config 00:04:30.343 
test-flow-perf: explicitly disabled via build config 00:04:30.343 test-gpudev: explicitly disabled via build config 00:04:30.343 test-mldev: explicitly disabled via build config 00:04:30.343 test-pipeline: explicitly disabled via build config 00:04:30.343 test-pmd: explicitly disabled via build config 00:04:30.343 test-regex: explicitly disabled via build config 00:04:30.343 test-sad: explicitly disabled via build config 00:04:30.343 test-security-perf: explicitly disabled via build config 00:04:30.343 00:04:30.343 libs: 00:04:30.343 argparse: explicitly disabled via build config 00:04:30.343 metrics: explicitly disabled via build config 00:04:30.343 acl: explicitly disabled via build config 00:04:30.343 bbdev: explicitly disabled via build config 00:04:30.343 bitratestats: explicitly disabled via build config 00:04:30.343 bpf: explicitly disabled via build config 00:04:30.343 cfgfile: explicitly disabled via build config 00:04:30.343 distributor: explicitly disabled via build config 00:04:30.343 efd: explicitly disabled via build config 00:04:30.343 eventdev: explicitly disabled via build config 00:04:30.343 dispatcher: explicitly disabled via build config 00:04:30.343 gpudev: explicitly disabled via build config 00:04:30.343 gro: explicitly disabled via build config 00:04:30.343 gso: explicitly disabled via build config 00:04:30.343 ip_frag: explicitly disabled via build config 00:04:30.343 jobstats: explicitly disabled via build config 00:04:30.343 latencystats: explicitly disabled via build config 00:04:30.343 lpm: explicitly disabled via build config 00:04:30.343 member: explicitly disabled via build config 00:04:30.343 pcapng: explicitly disabled via build config 00:04:30.343 rawdev: explicitly disabled via build config 00:04:30.343 regexdev: explicitly disabled via build config 00:04:30.343 mldev: explicitly disabled via build config 00:04:30.343 rib: explicitly disabled via build config 00:04:30.343 sched: explicitly disabled via build config 00:04:30.343 
stack: explicitly disabled via build config 00:04:30.343 ipsec: explicitly disabled via build config 00:04:30.343 pdcp: explicitly disabled via build config 00:04:30.343 fib: explicitly disabled via build config 00:04:30.343 port: explicitly disabled via build config 00:04:30.343 pdump: explicitly disabled via build config 00:04:30.343 table: explicitly disabled via build config 00:04:30.343 pipeline: explicitly disabled via build config 00:04:30.343 graph: explicitly disabled via build config 00:04:30.343 node: explicitly disabled via build config 00:04:30.343 00:04:30.343 drivers: 00:04:30.343 common/cpt: not in enabled drivers build config 00:04:30.343 common/dpaax: not in enabled drivers build config 00:04:30.343 common/iavf: not in enabled drivers build config 00:04:30.343 common/idpf: not in enabled drivers build config 00:04:30.343 common/ionic: not in enabled drivers build config 00:04:30.343 common/mvep: not in enabled drivers build config 00:04:30.343 common/octeontx: not in enabled drivers build config 00:04:30.343 bus/auxiliary: not in enabled drivers build config 00:04:30.343 bus/cdx: not in enabled drivers build config 00:04:30.343 bus/dpaa: not in enabled drivers build config 00:04:30.343 bus/fslmc: not in enabled drivers build config 00:04:30.343 bus/ifpga: not in enabled drivers build config 00:04:30.343 bus/platform: not in enabled drivers build config 00:04:30.343 bus/uacce: not in enabled drivers build config 00:04:30.343 bus/vmbus: not in enabled drivers build config 00:04:30.343 common/cnxk: not in enabled drivers build config 00:04:30.343 common/mlx5: not in enabled drivers build config 00:04:30.343 common/nfp: not in enabled drivers build config 00:04:30.343 common/nitrox: not in enabled drivers build config 00:04:30.343 common/qat: not in enabled drivers build config 00:04:30.343 common/sfc_efx: not in enabled drivers build config 00:04:30.343 mempool/bucket: not in enabled drivers build config 00:04:30.343 mempool/cnxk: not in enabled 
drivers build config 00:04:30.343 mempool/dpaa: not in enabled drivers build config 00:04:30.343 mempool/dpaa2: not in enabled drivers build config 00:04:30.343 mempool/octeontx: not in enabled drivers build config 00:04:30.343 mempool/stack: not in enabled drivers build config 00:04:30.343 dma/cnxk: not in enabled drivers build config 00:04:30.343 dma/dpaa: not in enabled drivers build config 00:04:30.343 dma/dpaa2: not in enabled drivers build config 00:04:30.343 dma/hisilicon: not in enabled drivers build config 00:04:30.343 dma/idxd: not in enabled drivers build config 00:04:30.343 dma/ioat: not in enabled drivers build config 00:04:30.343 dma/skeleton: not in enabled drivers build config 00:04:30.343 net/af_packet: not in enabled drivers build config 00:04:30.343 net/af_xdp: not in enabled drivers build config 00:04:30.343 net/ark: not in enabled drivers build config 00:04:30.343 net/atlantic: not in enabled drivers build config 00:04:30.343 net/avp: not in enabled drivers build config 00:04:30.343 net/axgbe: not in enabled drivers build config 00:04:30.343 net/bnx2x: not in enabled drivers build config 00:04:30.343 net/bnxt: not in enabled drivers build config 00:04:30.343 net/bonding: not in enabled drivers build config 00:04:30.343 net/cnxk: not in enabled drivers build config 00:04:30.343 net/cpfl: not in enabled drivers build config 00:04:30.343 net/cxgbe: not in enabled drivers build config 00:04:30.343 net/dpaa: not in enabled drivers build config 00:04:30.343 net/dpaa2: not in enabled drivers build config 00:04:30.343 net/e1000: not in enabled drivers build config 00:04:30.343 net/ena: not in enabled drivers build config 00:04:30.343 net/enetc: not in enabled drivers build config 00:04:30.343 net/enetfec: not in enabled drivers build config 00:04:30.343 net/enic: not in enabled drivers build config 00:04:30.343 net/failsafe: not in enabled drivers build config 00:04:30.343 net/fm10k: not in enabled drivers build config 00:04:30.343 net/gve: not in 
enabled drivers build config 00:04:30.343 net/hinic: not in enabled drivers build config 00:04:30.343 net/hns3: not in enabled drivers build config 00:04:30.343 net/i40e: not in enabled drivers build config 00:04:30.343 net/iavf: not in enabled drivers build config 00:04:30.343 net/ice: not in enabled drivers build config 00:04:30.343 net/idpf: not in enabled drivers build config 00:04:30.343 net/igc: not in enabled drivers build config 00:04:30.343 net/ionic: not in enabled drivers build config 00:04:30.343 net/ipn3ke: not in enabled drivers build config 00:04:30.343 net/ixgbe: not in enabled drivers build config 00:04:30.343 net/mana: not in enabled drivers build config 00:04:30.343 net/memif: not in enabled drivers build config 00:04:30.343 net/mlx4: not in enabled drivers build config 00:04:30.343 net/mlx5: not in enabled drivers build config 00:04:30.343 net/mvneta: not in enabled drivers build config 00:04:30.343 net/mvpp2: not in enabled drivers build config 00:04:30.343 net/netvsc: not in enabled drivers build config 00:04:30.343 net/nfb: not in enabled drivers build config 00:04:30.343 net/nfp: not in enabled drivers build config 00:04:30.343 net/ngbe: not in enabled drivers build config 00:04:30.343 net/null: not in enabled drivers build config 00:04:30.343 net/octeontx: not in enabled drivers build config 00:04:30.343 net/octeon_ep: not in enabled drivers build config 00:04:30.343 net/pcap: not in enabled drivers build config 00:04:30.343 net/pfe: not in enabled drivers build config 00:04:30.343 net/qede: not in enabled drivers build config 00:04:30.343 net/ring: not in enabled drivers build config 00:04:30.343 net/sfc: not in enabled drivers build config 00:04:30.343 net/softnic: not in enabled drivers build config 00:04:30.343 net/tap: not in enabled drivers build config 00:04:30.343 net/thunderx: not in enabled drivers build config 00:04:30.343 net/txgbe: not in enabled drivers build config 00:04:30.343 net/vdev_netvsc: not in enabled drivers build 
config 00:04:30.343 net/vhost: not in enabled drivers build config 00:04:30.343 net/virtio: not in enabled drivers build config 00:04:30.343 net/vmxnet3: not in enabled drivers build config 00:04:30.343 raw/*: missing internal dependency, "rawdev" 00:04:30.343 crypto/armv8: not in enabled drivers build config 00:04:30.343 crypto/bcmfs: not in enabled drivers build config 00:04:30.343 crypto/caam_jr: not in enabled drivers build config 00:04:30.343 crypto/ccp: not in enabled drivers build config 00:04:30.343 crypto/cnxk: not in enabled drivers build config 00:04:30.343 crypto/dpaa_sec: not in enabled drivers build config 00:04:30.343 crypto/dpaa2_sec: not in enabled drivers build config 00:04:30.343 crypto/ipsec_mb: not in enabled drivers build config 00:04:30.343 crypto/mlx5: not in enabled drivers build config 00:04:30.343 crypto/mvsam: not in enabled drivers build config 00:04:30.343 crypto/nitrox: not in enabled drivers build config 00:04:30.343 crypto/null: not in enabled drivers build config 00:04:30.343 crypto/octeontx: not in enabled drivers build config 00:04:30.343 crypto/openssl: not in enabled drivers build config 00:04:30.343 crypto/scheduler: not in enabled drivers build config 00:04:30.343 crypto/uadk: not in enabled drivers build config 00:04:30.343 crypto/virtio: not in enabled drivers build config 00:04:30.343 compress/isal: not in enabled drivers build config 00:04:30.343 compress/mlx5: not in enabled drivers build config 00:04:30.343 compress/nitrox: not in enabled drivers build config 00:04:30.343 compress/octeontx: not in enabled drivers build config 00:04:30.343 compress/zlib: not in enabled drivers build config 00:04:30.344 regex/*: missing internal dependency, "regexdev" 00:04:30.344 ml/*: missing internal dependency, "mldev" 00:04:30.344 vdpa/ifc: not in enabled drivers build config 00:04:30.344 vdpa/mlx5: not in enabled drivers build config 00:04:30.344 vdpa/nfp: not in enabled drivers build config 00:04:30.344 vdpa/sfc: not in enabled 
drivers build config 00:04:30.344 event/*: missing internal dependency, "eventdev" 00:04:30.344 baseband/*: missing internal dependency, "bbdev" 00:04:30.344 gpu/*: missing internal dependency, "gpudev" 00:04:30.344 00:04:30.344 00:04:30.344 Build targets in project: 85 00:04:30.344 00:04:30.344 DPDK 24.03.0 00:04:30.344 00:04:30.344 User defined options 00:04:30.344 buildtype : debug 00:04:30.344 default_library : shared 00:04:30.344 libdir : lib 00:04:30.344 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:30.344 b_sanitize : address 00:04:30.344 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:30.344 c_link_args : 00:04:30.344 cpu_instruction_set: native 00:04:30.344 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:30.344 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:30.344 enable_docs : false 00:04:30.344 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:30.344 enable_kmods : false 00:04:30.344 max_lcores : 128 00:04:30.344 tests : false 00:04:30.344 00:04:30.344 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:30.912 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:30.912 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:30.912 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:30.912 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:30.912 [4/268] 
Linking static target lib/librte_kvargs.a 00:04:30.912 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:30.912 [6/268] Linking static target lib/librte_log.a 00:04:31.483 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.483 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:31.483 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:31.483 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:31.741 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:31.741 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:31.741 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:31.741 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:31.741 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:31.999 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.999 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:31.999 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:31.999 [19/268] Linking static target lib/librte_telemetry.a 00:04:31.999 [20/268] Linking target lib/librte_log.so.24.1 00:04:32.257 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:32.516 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:32.516 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:32.516 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:32.774 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:32.774 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:32.774 
[27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:32.774 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:32.774 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:32.774 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:32.774 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:33.034 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.034 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:33.034 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:33.034 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:33.293 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:33.293 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:33.293 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:33.551 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:33.809 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:33.809 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:33.809 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:33.809 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:33.809 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:33.809 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:34.068 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:34.068 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:34.068 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:34.325 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:34.325 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:34.325 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:34.583 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:34.842 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:34.842 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:34.842 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:34.842 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:35.100 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:35.100 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:35.100 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:35.358 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:35.359 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:35.359 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:35.617 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:35.617 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:35.617 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:35.617 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:35.875 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:36.133 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:36.133 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:36.133 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:36.392 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:36.392 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:36.392 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:36.651 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:36.651 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:36.651 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:36.651 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:36.651 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:36.651 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:36.651 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:36.910 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:37.168 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:37.168 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:37.427 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:37.427 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:37.427 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:37.427 [87/268] Linking static target lib/librte_ring.a 00:04:37.427 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:37.692 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:37.692 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:37.692 [91/268] Linking static target lib/librte_rcu.a 00:04:37.692 [92/268] Linking static target lib/librte_eal.a 00:04:37.692 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:38.008 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 
00:04:38.008 [95/268] Linking static target lib/librte_mempool.a 00:04:38.008 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:38.008 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.279 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.279 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:38.279 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:38.279 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:38.538 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:38.538 [103/268] Linking static target lib/librte_mbuf.a 00:04:38.538 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:38.538 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:38.797 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:38.797 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:38.797 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:38.797 [109/268] Linking static target lib/librte_meter.a 00:04:39.055 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:39.055 [111/268] Linking static target lib/librte_net.a 00:04:39.055 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.314 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:39.314 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.314 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:39.314 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:39.573 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.573 [118/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:39.573 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.140 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:40.140 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:40.140 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:40.707 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:40.707 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:40.966 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:40.966 [126/268] Linking static target lib/librte_pci.a 00:04:40.966 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:40.966 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:41.224 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:41.224 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:41.224 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:41.224 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:41.224 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:41.224 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.224 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:41.482 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:41.482 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:41.482 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:41.482 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:41.482 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:41.482 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:41.482 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:41.482 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:41.482 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:41.483 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:41.741 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:42.000 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:42.000 [148/268] Linking static target lib/librte_cmdline.a 00:04:42.000 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:42.259 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:42.259 [151/268] Linking static target lib/librte_timer.a 00:04:42.259 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:42.259 [153/268] Linking static target lib/librte_ethdev.a 00:04:42.259 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:42.518 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:42.518 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:42.776 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:42.776 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:42.776 [159/268] Linking static target lib/librte_compressdev.a 00:04:43.035 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.035 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:43.035 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:43.294 
[163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:43.294 [164/268] Linking static target lib/librte_hash.a 00:04:43.294 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:43.552 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:43.552 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:43.810 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:43.810 [169/268] Linking static target lib/librte_dmadev.a 00:04:43.810 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.810 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:43.810 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:44.068 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.068 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:44.326 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:44.584 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:44.584 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.584 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.584 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:44.584 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:44.843 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:44.843 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:45.102 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:45.102 [184/268] Linking static target lib/librte_cryptodev.a 
00:04:45.102 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:45.102 [186/268] Linking static target lib/librte_power.a 00:04:45.669 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:45.669 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:45.669 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:45.669 [190/268] Linking static target lib/librte_reorder.a 00:04:45.669 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:45.669 [192/268] Linking static target lib/librte_security.a 00:04:45.927 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:46.186 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:46.186 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.444 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.444 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.703 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:46.961 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:46.961 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:47.219 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:47.219 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:47.477 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:47.477 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:47.736 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.736 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:47.736 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:47.994 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:47.994 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:47.994 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:48.251 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:48.251 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:48.251 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:48.251 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:48.251 [215/268] Linking static target drivers/librte_bus_vdev.a 00:04:48.508 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:48.508 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:48.508 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:48.508 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:48.508 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:48.508 [221/268] Linking static target drivers/librte_bus_pci.a 00:04:48.765 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:48.765 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.765 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:48.765 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:48.765 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:49.023 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:04:49.954 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:49.954 [229/268] Linking target lib/librte_eal.so.24.1 00:04:49.954 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:49.954 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:50.212 [232/268] Linking target lib/librte_meter.so.24.1 00:04:50.212 [233/268] Linking target lib/librte_ring.so.24.1 00:04:50.212 [234/268] Linking target lib/librte_pci.so.24.1 00:04:50.212 [235/268] Linking target lib/librte_timer.so.24.1 00:04:50.212 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:50.212 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:50.212 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:50.212 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:50.212 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:50.212 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:50.212 [242/268] Linking target lib/librte_rcu.so.24.1 00:04:50.212 [243/268] Linking target lib/librte_mempool.so.24.1 00:04:50.212 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:50.212 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:50.471 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:50.471 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:50.471 [248/268] Linking target lib/librte_mbuf.so.24.1 00:04:50.471 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:50.729 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:50.729 [251/268] Linking target lib/librte_reorder.so.24.1 00:04:50.729 [252/268] Linking target 
lib/librte_net.so.24.1 00:04:50.729 [253/268] Linking target lib/librte_compressdev.so.24.1 00:04:50.729 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:50.729 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:50.729 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:50.987 [257/268] Linking target lib/librte_cmdline.so.24.1 00:04:50.987 [258/268] Linking target lib/librte_security.so.24.1 00:04:50.987 [259/268] Linking target lib/librte_hash.so.24.1 00:04:50.987 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.987 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:50.987 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:51.245 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:51.245 [264/268] Linking target lib/librte_power.so.24.1 00:04:54.540 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:54.540 [266/268] Linking static target lib/librte_vhost.a 00:04:56.441 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.441 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:56.441 INFO: autodetecting backend as ninja 00:04:56.441 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:22.976 CC lib/ut/ut.o 00:05:22.976 CC lib/ut_mock/mock.o 00:05:22.976 CC lib/log/log_flags.o 00:05:22.976 CC lib/log/log_deprecated.o 00:05:22.976 CC lib/log/log.o 00:05:22.976 LIB libspdk_ut.a 00:05:22.976 LIB libspdk_ut_mock.a 00:05:22.976 SO libspdk_ut.so.2.0 00:05:22.976 SO libspdk_ut_mock.so.6.0 00:05:22.976 LIB libspdk_log.a 00:05:22.976 SYMLINK libspdk_ut_mock.so 00:05:22.976 SO libspdk_log.so.7.1 00:05:22.976 SYMLINK libspdk_ut.so 00:05:22.976 SYMLINK libspdk_log.so 
00:05:22.976 CC lib/util/base64.o 00:05:22.976 CC lib/util/bit_array.o 00:05:22.976 CC lib/util/cpuset.o 00:05:22.976 CC lib/util/crc16.o 00:05:22.976 CC lib/util/crc32.o 00:05:22.976 CC lib/util/crc32c.o 00:05:22.976 CC lib/ioat/ioat.o 00:05:22.976 CC lib/dma/dma.o 00:05:22.976 CXX lib/trace_parser/trace.o 00:05:22.976 CC lib/vfio_user/host/vfio_user_pci.o 00:05:22.976 CC lib/vfio_user/host/vfio_user.o 00:05:22.977 CC lib/util/crc32_ieee.o 00:05:22.977 CC lib/util/crc64.o 00:05:22.977 CC lib/util/dif.o 00:05:22.977 CC lib/util/fd.o 00:05:22.977 LIB libspdk_dma.a 00:05:22.977 CC lib/util/fd_group.o 00:05:22.977 SO libspdk_dma.so.5.0 00:05:22.977 CC lib/util/file.o 00:05:22.977 CC lib/util/hexlify.o 00:05:22.977 SYMLINK libspdk_dma.so 00:05:22.977 CC lib/util/iov.o 00:05:22.977 CC lib/util/math.o 00:05:22.977 LIB libspdk_ioat.a 00:05:22.977 SO libspdk_ioat.so.7.0 00:05:22.977 CC lib/util/net.o 00:05:22.977 LIB libspdk_vfio_user.a 00:05:22.977 SYMLINK libspdk_ioat.so 00:05:22.977 CC lib/util/pipe.o 00:05:22.977 SO libspdk_vfio_user.so.5.0 00:05:22.977 CC lib/util/strerror_tls.o 00:05:22.977 CC lib/util/string.o 00:05:22.977 CC lib/util/uuid.o 00:05:22.977 SYMLINK libspdk_vfio_user.so 00:05:22.977 CC lib/util/xor.o 00:05:22.977 CC lib/util/zipf.o 00:05:22.977 CC lib/util/md5.o 00:05:22.977 LIB libspdk_util.a 00:05:22.977 SO libspdk_util.so.10.1 00:05:22.977 SYMLINK libspdk_util.so 00:05:22.977 LIB libspdk_trace_parser.a 00:05:22.977 SO libspdk_trace_parser.so.6.0 00:05:22.977 SYMLINK libspdk_trace_parser.so 00:05:22.977 CC lib/conf/conf.o 00:05:22.977 CC lib/env_dpdk/env.o 00:05:22.977 CC lib/json/json_parse.o 00:05:22.977 CC lib/env_dpdk/memory.o 00:05:22.977 CC lib/rdma_utils/rdma_utils.o 00:05:22.977 CC lib/json/json_util.o 00:05:22.977 CC lib/json/json_write.o 00:05:22.977 CC lib/env_dpdk/pci.o 00:05:22.977 CC lib/vmd/vmd.o 00:05:22.977 CC lib/idxd/idxd.o 00:05:23.235 LIB libspdk_conf.a 00:05:23.235 SO libspdk_conf.so.6.0 00:05:23.235 CC lib/idxd/idxd_user.o 
00:05:23.235 CC lib/idxd/idxd_kernel.o 00:05:23.235 SYMLINK libspdk_conf.so 00:05:23.235 LIB libspdk_rdma_utils.a 00:05:23.235 CC lib/env_dpdk/init.o 00:05:23.235 SO libspdk_rdma_utils.so.1.0 00:05:23.235 LIB libspdk_json.a 00:05:23.493 SO libspdk_json.so.6.0 00:05:23.493 SYMLINK libspdk_rdma_utils.so 00:05:23.493 CC lib/vmd/led.o 00:05:23.493 SYMLINK libspdk_json.so 00:05:23.493 CC lib/env_dpdk/threads.o 00:05:23.493 CC lib/env_dpdk/pci_ioat.o 00:05:23.493 CC lib/env_dpdk/pci_virtio.o 00:05:23.493 CC lib/rdma_provider/common.o 00:05:23.752 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:23.752 CC lib/env_dpdk/pci_vmd.o 00:05:23.752 CC lib/env_dpdk/pci_idxd.o 00:05:23.752 CC lib/env_dpdk/pci_event.o 00:05:23.752 CC lib/jsonrpc/jsonrpc_server.o 00:05:23.752 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:23.752 CC lib/env_dpdk/sigbus_handler.o 00:05:23.752 CC lib/env_dpdk/pci_dpdk.o 00:05:23.752 LIB libspdk_idxd.a 00:05:24.010 LIB libspdk_rdma_provider.a 00:05:24.010 SO libspdk_rdma_provider.so.7.0 00:05:24.010 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:24.010 SO libspdk_idxd.so.12.1 00:05:24.010 LIB libspdk_vmd.a 00:05:24.010 SYMLINK libspdk_rdma_provider.so 00:05:24.010 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:24.010 SO libspdk_vmd.so.6.0 00:05:24.010 CC lib/jsonrpc/jsonrpc_client.o 00:05:24.010 SYMLINK libspdk_idxd.so 00:05:24.010 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:24.010 SYMLINK libspdk_vmd.so 00:05:24.268 LIB libspdk_jsonrpc.a 00:05:24.268 SO libspdk_jsonrpc.so.6.0 00:05:24.525 SYMLINK libspdk_jsonrpc.so 00:05:24.784 CC lib/rpc/rpc.o 00:05:25.041 LIB libspdk_rpc.a 00:05:25.041 SO libspdk_rpc.so.6.0 00:05:25.041 SYMLINK libspdk_rpc.so 00:05:25.041 LIB libspdk_env_dpdk.a 00:05:25.299 SO libspdk_env_dpdk.so.15.1 00:05:25.299 CC lib/trace/trace.o 00:05:25.299 CC lib/trace/trace_flags.o 00:05:25.299 CC lib/trace/trace_rpc.o 00:05:25.299 CC lib/notify/notify.o 00:05:25.299 CC lib/notify/notify_rpc.o 00:05:25.299 CC lib/keyring/keyring.o 00:05:25.299 CC 
lib/keyring/keyring_rpc.o 00:05:25.299 SYMLINK libspdk_env_dpdk.so 00:05:25.558 LIB libspdk_notify.a 00:05:25.558 SO libspdk_notify.so.6.0 00:05:25.558 SYMLINK libspdk_notify.so 00:05:25.558 LIB libspdk_trace.a 00:05:25.558 LIB libspdk_keyring.a 00:05:25.558 SO libspdk_trace.so.11.0 00:05:25.817 SO libspdk_keyring.so.2.0 00:05:25.817 SYMLINK libspdk_keyring.so 00:05:25.817 SYMLINK libspdk_trace.so 00:05:26.078 CC lib/thread/thread.o 00:05:26.078 CC lib/thread/iobuf.o 00:05:26.078 CC lib/sock/sock.o 00:05:26.078 CC lib/sock/sock_rpc.o 00:05:26.646 LIB libspdk_sock.a 00:05:26.646 SO libspdk_sock.so.10.0 00:05:26.904 SYMLINK libspdk_sock.so 00:05:27.161 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:27.161 CC lib/nvme/nvme_fabric.o 00:05:27.161 CC lib/nvme/nvme_ns_cmd.o 00:05:27.161 CC lib/nvme/nvme_ctrlr.o 00:05:27.161 CC lib/nvme/nvme_ns.o 00:05:27.161 CC lib/nvme/nvme_pcie_common.o 00:05:27.161 CC lib/nvme/nvme_pcie.o 00:05:27.161 CC lib/nvme/nvme_qpair.o 00:05:27.161 CC lib/nvme/nvme.o 00:05:28.095 CC lib/nvme/nvme_quirks.o 00:05:28.095 CC lib/nvme/nvme_transport.o 00:05:28.095 CC lib/nvme/nvme_discovery.o 00:05:28.095 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:28.095 LIB libspdk_thread.a 00:05:28.095 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:28.354 CC lib/nvme/nvme_tcp.o 00:05:28.354 SO libspdk_thread.so.11.0 00:05:28.354 CC lib/nvme/nvme_opal.o 00:05:28.354 SYMLINK libspdk_thread.so 00:05:28.354 CC lib/nvme/nvme_io_msg.o 00:05:28.612 CC lib/nvme/nvme_poll_group.o 00:05:28.612 CC lib/nvme/nvme_zns.o 00:05:28.871 CC lib/nvme/nvme_stubs.o 00:05:28.871 CC lib/nvme/nvme_auth.o 00:05:28.871 CC lib/nvme/nvme_cuse.o 00:05:29.130 CC lib/nvme/nvme_rdma.o 00:05:29.130 CC lib/accel/accel.o 00:05:29.130 CC lib/blob/blobstore.o 00:05:29.389 CC lib/accel/accel_rpc.o 00:05:29.389 CC lib/init/json_config.o 00:05:29.648 CC lib/virtio/virtio.o 00:05:29.648 CC lib/virtio/virtio_vhost_user.o 00:05:29.906 CC lib/init/subsystem.o 00:05:29.906 CC lib/virtio/virtio_vfio_user.o 00:05:29.906 CC 
lib/init/subsystem_rpc.o 00:05:30.164 CC lib/accel/accel_sw.o 00:05:30.164 CC lib/blob/request.o 00:05:30.164 CC lib/init/rpc.o 00:05:30.164 CC lib/blob/zeroes.o 00:05:30.164 CC lib/fsdev/fsdev.o 00:05:30.164 CC lib/virtio/virtio_pci.o 00:05:30.422 LIB libspdk_init.a 00:05:30.422 CC lib/blob/blob_bs_dev.o 00:05:30.422 CC lib/fsdev/fsdev_io.o 00:05:30.422 SO libspdk_init.so.6.0 00:05:30.422 CC lib/fsdev/fsdev_rpc.o 00:05:30.422 SYMLINK libspdk_init.so 00:05:30.681 LIB libspdk_accel.a 00:05:30.681 LIB libspdk_virtio.a 00:05:30.681 CC lib/event/app.o 00:05:30.681 CC lib/event/log_rpc.o 00:05:30.681 CC lib/event/reactor.o 00:05:30.681 CC lib/event/app_rpc.o 00:05:30.681 SO libspdk_virtio.so.7.0 00:05:30.681 SO libspdk_accel.so.16.0 00:05:30.681 SYMLINK libspdk_virtio.so 00:05:30.681 CC lib/event/scheduler_static.o 00:05:30.681 SYMLINK libspdk_accel.so 00:05:30.940 LIB libspdk_nvme.a 00:05:30.940 CC lib/bdev/bdev_rpc.o 00:05:30.940 CC lib/bdev/bdev.o 00:05:30.940 CC lib/bdev/part.o 00:05:30.940 CC lib/bdev/scsi_nvme.o 00:05:30.940 CC lib/bdev/bdev_zone.o 00:05:31.198 LIB libspdk_fsdev.a 00:05:31.198 SO libspdk_nvme.so.15.0 00:05:31.198 SO libspdk_fsdev.so.2.0 00:05:31.198 SYMLINK libspdk_fsdev.so 00:05:31.198 LIB libspdk_event.a 00:05:31.456 SO libspdk_event.so.14.0 00:05:31.456 SYMLINK libspdk_nvme.so 00:05:31.456 SYMLINK libspdk_event.so 00:05:31.456 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:32.391 LIB libspdk_fuse_dispatcher.a 00:05:32.391 SO libspdk_fuse_dispatcher.so.1.0 00:05:32.391 SYMLINK libspdk_fuse_dispatcher.so 00:05:33.819 LIB libspdk_blob.a 00:05:33.819 SO libspdk_blob.so.12.0 00:05:34.078 SYMLINK libspdk_blob.so 00:05:34.337 CC lib/blobfs/blobfs.o 00:05:34.337 CC lib/blobfs/tree.o 00:05:34.337 CC lib/lvol/lvol.o 00:05:34.918 LIB libspdk_bdev.a 00:05:34.918 SO libspdk_bdev.so.17.0 00:05:35.191 SYMLINK libspdk_bdev.so 00:05:35.451 CC lib/ftl/ftl_core.o 00:05:35.451 CC lib/ftl/ftl_init.o 00:05:35.451 CC lib/ftl/ftl_layout.o 00:05:35.451 CC 
lib/ftl/ftl_debug.o 00:05:35.451 CC lib/scsi/dev.o 00:05:35.451 CC lib/nvmf/ctrlr.o 00:05:35.451 CC lib/ublk/ublk.o 00:05:35.451 CC lib/nbd/nbd.o 00:05:35.451 LIB libspdk_blobfs.a 00:05:35.451 SO libspdk_blobfs.so.11.0 00:05:35.710 SYMLINK libspdk_blobfs.so 00:05:35.710 CC lib/nbd/nbd_rpc.o 00:05:35.710 CC lib/scsi/lun.o 00:05:35.710 CC lib/ublk/ublk_rpc.o 00:05:35.710 CC lib/ftl/ftl_io.o 00:05:35.710 LIB libspdk_lvol.a 00:05:35.710 CC lib/ftl/ftl_sb.o 00:05:35.710 SO libspdk_lvol.so.11.0 00:05:35.969 CC lib/ftl/ftl_l2p.o 00:05:35.969 CC lib/ftl/ftl_l2p_flat.o 00:05:35.969 SYMLINK libspdk_lvol.so 00:05:35.969 CC lib/ftl/ftl_nv_cache.o 00:05:35.969 CC lib/ftl/ftl_band.o 00:05:35.969 CC lib/ftl/ftl_band_ops.o 00:05:35.969 LIB libspdk_nbd.a 00:05:35.969 SO libspdk_nbd.so.7.0 00:05:35.969 CC lib/nvmf/ctrlr_discovery.o 00:05:35.969 CC lib/scsi/port.o 00:05:36.229 SYMLINK libspdk_nbd.so 00:05:36.230 CC lib/scsi/scsi.o 00:05:36.230 CC lib/scsi/scsi_bdev.o 00:05:36.230 CC lib/ftl/ftl_writer.o 00:05:36.230 CC lib/ftl/ftl_rq.o 00:05:36.230 LIB libspdk_ublk.a 00:05:36.230 SO libspdk_ublk.so.3.0 00:05:36.230 CC lib/ftl/ftl_reloc.o 00:05:36.230 SYMLINK libspdk_ublk.so 00:05:36.230 CC lib/ftl/ftl_l2p_cache.o 00:05:36.488 CC lib/ftl/ftl_p2l.o 00:05:36.488 CC lib/ftl/ftl_p2l_log.o 00:05:36.488 CC lib/ftl/mngt/ftl_mngt.o 00:05:36.488 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:36.747 CC lib/nvmf/ctrlr_bdev.o 00:05:36.747 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:36.747 CC lib/scsi/scsi_pr.o 00:05:36.747 CC lib/scsi/scsi_rpc.o 00:05:36.747 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:36.747 CC lib/scsi/task.o 00:05:36.747 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:37.006 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:37.006 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:37.006 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:37.006 CC lib/nvmf/subsystem.o 00:05:37.264 CC lib/nvmf/nvmf.o 00:05:37.264 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:37.264 LIB libspdk_scsi.a 00:05:37.264 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:37.264 SO 
libspdk_scsi.so.9.0 00:05:37.264 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:37.264 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:37.264 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:37.264 SYMLINK libspdk_scsi.so 00:05:37.264 CC lib/ftl/utils/ftl_conf.o 00:05:37.522 CC lib/ftl/utils/ftl_md.o 00:05:37.522 CC lib/nvmf/nvmf_rpc.o 00:05:37.522 CC lib/ftl/utils/ftl_mempool.o 00:05:37.523 CC lib/ftl/utils/ftl_bitmap.o 00:05:37.523 CC lib/ftl/utils/ftl_property.o 00:05:37.523 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:37.781 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:37.781 CC lib/nvmf/transport.o 00:05:37.781 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:37.781 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:37.781 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:38.040 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:38.040 CC lib/iscsi/conn.o 00:05:38.040 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:38.040 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:38.040 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:38.299 CC lib/vhost/vhost.o 00:05:38.299 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:38.299 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:38.299 CC lib/iscsi/init_grp.o 00:05:38.299 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:38.299 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:38.558 CC lib/nvmf/tcp.o 00:05:38.558 CC lib/ftl/base/ftl_base_dev.o 00:05:38.558 CC lib/nvmf/stubs.o 00:05:38.558 CC lib/vhost/vhost_rpc.o 00:05:38.558 CC lib/iscsi/iscsi.o 00:05:38.558 CC lib/ftl/base/ftl_base_bdev.o 00:05:38.558 CC lib/ftl/ftl_trace.o 00:05:38.818 CC lib/vhost/vhost_scsi.o 00:05:38.818 CC lib/vhost/vhost_blk.o 00:05:38.818 CC lib/vhost/rte_vhost_user.o 00:05:38.818 CC lib/iscsi/param.o 00:05:38.818 LIB libspdk_ftl.a 00:05:39.076 CC lib/nvmf/mdns_server.o 00:05:39.335 CC lib/nvmf/rdma.o 00:05:39.335 SO libspdk_ftl.so.9.0 00:05:39.335 CC lib/nvmf/auth.o 00:05:39.335 CC lib/iscsi/portal_grp.o 00:05:39.593 SYMLINK libspdk_ftl.so 00:05:39.593 CC lib/iscsi/tgt_node.o 00:05:39.593 CC lib/iscsi/iscsi_subsystem.o 00:05:39.852 CC 
lib/iscsi/iscsi_rpc.o 00:05:39.852 CC lib/iscsi/task.o 00:05:40.112 LIB libspdk_vhost.a 00:05:40.112 SO libspdk_vhost.so.8.0 00:05:40.371 SYMLINK libspdk_vhost.so 00:05:40.630 LIB libspdk_iscsi.a 00:05:40.630 SO libspdk_iscsi.so.8.0 00:05:40.889 SYMLINK libspdk_iscsi.so 00:05:42.266 LIB libspdk_nvmf.a 00:05:42.525 SO libspdk_nvmf.so.20.0 00:05:42.783 SYMLINK libspdk_nvmf.so 00:05:43.042 CC module/env_dpdk/env_dpdk_rpc.o 00:05:43.301 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:43.301 CC module/scheduler/gscheduler/gscheduler.o 00:05:43.301 CC module/keyring/file/keyring.o 00:05:43.301 CC module/accel/ioat/accel_ioat.o 00:05:43.301 CC module/fsdev/aio/fsdev_aio.o 00:05:43.301 CC module/accel/error/accel_error.o 00:05:43.301 CC module/blob/bdev/blob_bdev.o 00:05:43.301 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:43.301 CC module/sock/posix/posix.o 00:05:43.301 LIB libspdk_env_dpdk_rpc.a 00:05:43.301 SO libspdk_env_dpdk_rpc.so.6.0 00:05:43.301 SYMLINK libspdk_env_dpdk_rpc.so 00:05:43.301 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:43.301 LIB libspdk_scheduler_gscheduler.a 00:05:43.301 CC module/keyring/file/keyring_rpc.o 00:05:43.301 SO libspdk_scheduler_gscheduler.so.4.0 00:05:43.559 CC module/accel/ioat/accel_ioat_rpc.o 00:05:43.559 CC module/accel/error/accel_error_rpc.o 00:05:43.559 LIB libspdk_scheduler_dynamic.a 00:05:43.559 LIB libspdk_scheduler_dpdk_governor.a 00:05:43.559 SO libspdk_scheduler_dynamic.so.4.0 00:05:43.559 SYMLINK libspdk_scheduler_gscheduler.so 00:05:43.559 CC module/fsdev/aio/linux_aio_mgr.o 00:05:43.559 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:43.559 LIB libspdk_blob_bdev.a 00:05:43.559 SYMLINK libspdk_scheduler_dynamic.so 00:05:43.559 LIB libspdk_keyring_file.a 00:05:43.559 SO libspdk_blob_bdev.so.12.0 00:05:43.559 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:43.559 SO libspdk_keyring_file.so.2.0 00:05:43.559 LIB libspdk_accel_error.a 00:05:43.559 LIB libspdk_accel_ioat.a 00:05:43.559 SYMLINK 
libspdk_blob_bdev.so 00:05:43.559 SO libspdk_accel_error.so.2.0 00:05:43.559 SO libspdk_accel_ioat.so.6.0 00:05:43.818 SYMLINK libspdk_keyring_file.so 00:05:43.818 SYMLINK libspdk_accel_error.so 00:05:43.818 SYMLINK libspdk_accel_ioat.so 00:05:43.818 CC module/keyring/linux/keyring.o 00:05:43.818 CC module/keyring/linux/keyring_rpc.o 00:05:43.818 CC module/accel/dsa/accel_dsa.o 00:05:43.818 CC module/accel/dsa/accel_dsa_rpc.o 00:05:43.818 CC module/accel/iaa/accel_iaa.o 00:05:43.818 LIB libspdk_keyring_linux.a 00:05:44.081 SO libspdk_keyring_linux.so.1.0 00:05:44.081 CC module/blobfs/bdev/blobfs_bdev.o 00:05:44.081 CC module/bdev/delay/vbdev_delay.o 00:05:44.081 CC module/bdev/error/vbdev_error.o 00:05:44.081 SYMLINK libspdk_keyring_linux.so 00:05:44.081 CC module/bdev/error/vbdev_error_rpc.o 00:05:44.081 LIB libspdk_fsdev_aio.a 00:05:44.082 CC module/bdev/gpt/gpt.o 00:05:44.082 CC module/accel/iaa/accel_iaa_rpc.o 00:05:44.082 SO libspdk_fsdev_aio.so.1.0 00:05:44.082 LIB libspdk_accel_dsa.a 00:05:44.082 LIB libspdk_sock_posix.a 00:05:44.352 CC module/bdev/lvol/vbdev_lvol.o 00:05:44.352 SO libspdk_accel_dsa.so.5.0 00:05:44.352 SO libspdk_sock_posix.so.6.0 00:05:44.352 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:44.352 SYMLINK libspdk_fsdev_aio.so 00:05:44.352 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:44.352 SYMLINK libspdk_accel_dsa.so 00:05:44.352 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:44.352 LIB libspdk_accel_iaa.a 00:05:44.352 SYMLINK libspdk_sock_posix.so 00:05:44.352 CC module/bdev/gpt/vbdev_gpt.o 00:05:44.352 SO libspdk_accel_iaa.so.3.0 00:05:44.352 LIB libspdk_bdev_error.a 00:05:44.352 SYMLINK libspdk_accel_iaa.so 00:05:44.352 SO libspdk_bdev_error.so.6.0 00:05:44.352 LIB libspdk_blobfs_bdev.a 00:05:44.352 CC module/bdev/malloc/bdev_malloc.o 00:05:44.610 SO libspdk_blobfs_bdev.so.6.0 00:05:44.610 SYMLINK libspdk_bdev_error.so 00:05:44.610 CC module/bdev/null/bdev_null.o 00:05:44.610 SYMLINK libspdk_blobfs_bdev.so 00:05:44.610 CC 
module/bdev/null/bdev_null_rpc.o 00:05:44.610 LIB libspdk_bdev_delay.a 00:05:44.610 SO libspdk_bdev_delay.so.6.0 00:05:44.610 CC module/bdev/nvme/bdev_nvme.o 00:05:44.610 LIB libspdk_bdev_gpt.a 00:05:44.610 SYMLINK libspdk_bdev_delay.so 00:05:44.610 CC module/bdev/passthru/vbdev_passthru.o 00:05:44.610 CC module/bdev/raid/bdev_raid.o 00:05:44.610 SO libspdk_bdev_gpt.so.6.0 00:05:44.868 SYMLINK libspdk_bdev_gpt.so 00:05:44.868 CC module/bdev/split/vbdev_split.o 00:05:44.868 LIB libspdk_bdev_null.a 00:05:44.868 SO libspdk_bdev_null.so.6.0 00:05:44.868 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:44.868 CC module/bdev/aio/bdev_aio.o 00:05:44.868 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:44.868 LIB libspdk_bdev_lvol.a 00:05:44.868 CC module/bdev/ftl/bdev_ftl.o 00:05:45.125 SYMLINK libspdk_bdev_null.so 00:05:45.125 SO libspdk_bdev_lvol.so.6.0 00:05:45.125 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:45.125 SYMLINK libspdk_bdev_lvol.so 00:05:45.125 LIB libspdk_bdev_malloc.a 00:05:45.125 SO libspdk_bdev_malloc.so.6.0 00:05:45.125 CC module/bdev/split/vbdev_split_rpc.o 00:05:45.125 CC module/bdev/iscsi/bdev_iscsi.o 00:05:45.382 LIB libspdk_bdev_passthru.a 00:05:45.382 SYMLINK libspdk_bdev_malloc.so 00:05:45.382 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:45.382 SO libspdk_bdev_passthru.so.6.0 00:05:45.382 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:45.382 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:45.382 CC module/bdev/aio/bdev_aio_rpc.o 00:05:45.382 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:45.382 LIB libspdk_bdev_split.a 00:05:45.382 SYMLINK libspdk_bdev_passthru.so 00:05:45.382 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:45.382 SO libspdk_bdev_split.so.6.0 00:05:45.382 CC module/bdev/nvme/nvme_rpc.o 00:05:45.382 SYMLINK libspdk_bdev_split.so 00:05:45.382 CC module/bdev/nvme/bdev_mdns_client.o 00:05:45.641 LIB libspdk_bdev_aio.a 00:05:45.641 LIB libspdk_bdev_zone_block.a 00:05:45.641 SO libspdk_bdev_aio.so.6.0 00:05:45.641 SO 
libspdk_bdev_zone_block.so.6.0 00:05:45.641 LIB libspdk_bdev_ftl.a 00:05:45.641 SO libspdk_bdev_ftl.so.6.0 00:05:45.641 SYMLINK libspdk_bdev_aio.so 00:05:45.641 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:45.641 SYMLINK libspdk_bdev_zone_block.so 00:05:45.641 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:45.641 CC module/bdev/raid/bdev_raid_rpc.o 00:05:45.641 LIB libspdk_bdev_iscsi.a 00:05:45.641 SYMLINK libspdk_bdev_ftl.so 00:05:45.641 CC module/bdev/raid/bdev_raid_sb.o 00:05:45.641 SO libspdk_bdev_iscsi.so.6.0 00:05:45.641 CC module/bdev/nvme/vbdev_opal.o 00:05:45.900 SYMLINK libspdk_bdev_iscsi.so 00:05:45.900 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:45.900 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:45.900 CC module/bdev/raid/raid0.o 00:05:45.900 CC module/bdev/raid/raid1.o 00:05:45.900 LIB libspdk_bdev_virtio.a 00:05:46.159 SO libspdk_bdev_virtio.so.6.0 00:05:46.159 CC module/bdev/raid/concat.o 00:05:46.159 CC module/bdev/raid/raid5f.o 00:05:46.159 SYMLINK libspdk_bdev_virtio.so 00:05:46.726 LIB libspdk_bdev_raid.a 00:05:46.985 SO libspdk_bdev_raid.so.6.0 00:05:46.985 SYMLINK libspdk_bdev_raid.so 00:05:48.361 LIB libspdk_bdev_nvme.a 00:05:48.361 SO libspdk_bdev_nvme.so.7.1 00:05:48.620 SYMLINK libspdk_bdev_nvme.so 00:05:49.188 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:49.188 CC module/event/subsystems/scheduler/scheduler.o 00:05:49.188 CC module/event/subsystems/keyring/keyring.o 00:05:49.188 CC module/event/subsystems/sock/sock.o 00:05:49.188 CC module/event/subsystems/vmd/vmd.o 00:05:49.188 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:49.188 CC module/event/subsystems/iobuf/iobuf.o 00:05:49.188 CC module/event/subsystems/fsdev/fsdev.o 00:05:49.188 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:49.188 LIB libspdk_event_vhost_blk.a 00:05:49.188 LIB libspdk_event_sock.a 00:05:49.188 LIB libspdk_event_fsdev.a 00:05:49.188 SO libspdk_event_vhost_blk.so.3.0 00:05:49.188 LIB libspdk_event_scheduler.a 00:05:49.188 LIB libspdk_event_keyring.a 
00:05:49.188 SO libspdk_event_fsdev.so.1.0 00:05:49.188 SO libspdk_event_sock.so.5.0 00:05:49.188 LIB libspdk_event_iobuf.a 00:05:49.188 SO libspdk_event_keyring.so.1.0 00:05:49.188 SO libspdk_event_scheduler.so.4.0 00:05:49.188 LIB libspdk_event_vmd.a 00:05:49.188 SO libspdk_event_iobuf.so.3.0 00:05:49.188 SYMLINK libspdk_event_vhost_blk.so 00:05:49.188 SYMLINK libspdk_event_fsdev.so 00:05:49.188 SYMLINK libspdk_event_keyring.so 00:05:49.188 SYMLINK libspdk_event_sock.so 00:05:49.188 SYMLINK libspdk_event_scheduler.so 00:05:49.188 SO libspdk_event_vmd.so.6.0 00:05:49.447 SYMLINK libspdk_event_iobuf.so 00:05:49.447 SYMLINK libspdk_event_vmd.so 00:05:49.705 CC module/event/subsystems/accel/accel.o 00:05:49.705 LIB libspdk_event_accel.a 00:05:49.705 SO libspdk_event_accel.so.6.0 00:05:49.964 SYMLINK libspdk_event_accel.so 00:05:50.222 CC module/event/subsystems/bdev/bdev.o 00:05:50.481 LIB libspdk_event_bdev.a 00:05:50.481 SO libspdk_event_bdev.so.6.0 00:05:50.481 SYMLINK libspdk_event_bdev.so 00:05:50.739 CC module/event/subsystems/nbd/nbd.o 00:05:50.739 CC module/event/subsystems/scsi/scsi.o 00:05:50.739 CC module/event/subsystems/ublk/ublk.o 00:05:50.739 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:50.739 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:50.998 LIB libspdk_event_ublk.a 00:05:50.998 LIB libspdk_event_scsi.a 00:05:50.998 SO libspdk_event_ublk.so.3.0 00:05:50.998 LIB libspdk_event_nbd.a 00:05:50.998 SO libspdk_event_scsi.so.6.0 00:05:50.998 SO libspdk_event_nbd.so.6.0 00:05:50.998 SYMLINK libspdk_event_ublk.so 00:05:50.998 LIB libspdk_event_nvmf.a 00:05:50.998 SYMLINK libspdk_event_scsi.so 00:05:50.998 SYMLINK libspdk_event_nbd.so 00:05:50.998 SO libspdk_event_nvmf.so.6.0 00:05:51.256 SYMLINK libspdk_event_nvmf.so 00:05:51.256 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:51.256 CC module/event/subsystems/iscsi/iscsi.o 00:05:51.515 LIB libspdk_event_vhost_scsi.a 00:05:51.515 LIB libspdk_event_iscsi.a 00:05:51.515 SO 
libspdk_event_iscsi.so.6.0 00:05:51.515 SO libspdk_event_vhost_scsi.so.3.0 00:05:51.515 SYMLINK libspdk_event_vhost_scsi.so 00:05:51.515 SYMLINK libspdk_event_iscsi.so 00:05:51.773 SO libspdk.so.6.0 00:05:51.773 SYMLINK libspdk.so 00:05:52.032 TEST_HEADER include/spdk/accel.h 00:05:52.032 TEST_HEADER include/spdk/accel_module.h 00:05:52.032 CC test/rpc_client/rpc_client_test.o 00:05:52.032 TEST_HEADER include/spdk/assert.h 00:05:52.032 TEST_HEADER include/spdk/barrier.h 00:05:52.032 TEST_HEADER include/spdk/base64.h 00:05:52.032 TEST_HEADER include/spdk/bdev.h 00:05:52.032 TEST_HEADER include/spdk/bdev_module.h 00:05:52.032 TEST_HEADER include/spdk/bdev_zone.h 00:05:52.032 TEST_HEADER include/spdk/bit_array.h 00:05:52.032 TEST_HEADER include/spdk/bit_pool.h 00:05:52.032 TEST_HEADER include/spdk/blob_bdev.h 00:05:52.032 CXX app/trace/trace.o 00:05:52.032 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:52.032 TEST_HEADER include/spdk/blobfs.h 00:05:52.032 TEST_HEADER include/spdk/blob.h 00:05:52.032 TEST_HEADER include/spdk/conf.h 00:05:52.032 TEST_HEADER include/spdk/config.h 00:05:52.032 TEST_HEADER include/spdk/cpuset.h 00:05:52.032 TEST_HEADER include/spdk/crc16.h 00:05:52.032 TEST_HEADER include/spdk/crc32.h 00:05:52.032 TEST_HEADER include/spdk/crc64.h 00:05:52.032 TEST_HEADER include/spdk/dif.h 00:05:52.032 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:52.032 TEST_HEADER include/spdk/dma.h 00:05:52.032 TEST_HEADER include/spdk/endian.h 00:05:52.032 TEST_HEADER include/spdk/env_dpdk.h 00:05:52.032 TEST_HEADER include/spdk/env.h 00:05:52.032 TEST_HEADER include/spdk/event.h 00:05:52.032 TEST_HEADER include/spdk/fd_group.h 00:05:52.032 TEST_HEADER include/spdk/fd.h 00:05:52.032 TEST_HEADER include/spdk/file.h 00:05:52.032 TEST_HEADER include/spdk/fsdev.h 00:05:52.032 TEST_HEADER include/spdk/fsdev_module.h 00:05:52.032 TEST_HEADER include/spdk/ftl.h 00:05:52.032 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:52.032 TEST_HEADER include/spdk/gpt_spec.h 
00:05:52.032 TEST_HEADER include/spdk/hexlify.h 00:05:52.032 TEST_HEADER include/spdk/histogram_data.h 00:05:52.032 TEST_HEADER include/spdk/idxd.h 00:05:52.032 TEST_HEADER include/spdk/idxd_spec.h 00:05:52.032 TEST_HEADER include/spdk/init.h 00:05:52.032 TEST_HEADER include/spdk/ioat.h 00:05:52.032 TEST_HEADER include/spdk/ioat_spec.h 00:05:52.032 TEST_HEADER include/spdk/iscsi_spec.h 00:05:52.032 TEST_HEADER include/spdk/json.h 00:05:52.032 CC examples/util/zipf/zipf.o 00:05:52.032 TEST_HEADER include/spdk/jsonrpc.h 00:05:52.032 TEST_HEADER include/spdk/keyring.h 00:05:52.032 TEST_HEADER include/spdk/keyring_module.h 00:05:52.321 CC examples/ioat/perf/perf.o 00:05:52.321 TEST_HEADER include/spdk/likely.h 00:05:52.321 TEST_HEADER include/spdk/log.h 00:05:52.321 TEST_HEADER include/spdk/lvol.h 00:05:52.321 CC test/thread/poller_perf/poller_perf.o 00:05:52.321 TEST_HEADER include/spdk/md5.h 00:05:52.321 TEST_HEADER include/spdk/memory.h 00:05:52.321 TEST_HEADER include/spdk/mmio.h 00:05:52.321 TEST_HEADER include/spdk/nbd.h 00:05:52.321 TEST_HEADER include/spdk/net.h 00:05:52.321 TEST_HEADER include/spdk/notify.h 00:05:52.321 TEST_HEADER include/spdk/nvme.h 00:05:52.321 TEST_HEADER include/spdk/nvme_intel.h 00:05:52.321 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:52.321 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:52.321 TEST_HEADER include/spdk/nvme_spec.h 00:05:52.321 TEST_HEADER include/spdk/nvme_zns.h 00:05:52.321 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:52.321 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:52.321 TEST_HEADER include/spdk/nvmf.h 00:05:52.321 TEST_HEADER include/spdk/nvmf_spec.h 00:05:52.321 CC test/dma/test_dma/test_dma.o 00:05:52.321 TEST_HEADER include/spdk/nvmf_transport.h 00:05:52.321 TEST_HEADER include/spdk/opal.h 00:05:52.321 TEST_HEADER include/spdk/opal_spec.h 00:05:52.321 TEST_HEADER include/spdk/pci_ids.h 00:05:52.321 TEST_HEADER include/spdk/pipe.h 00:05:52.321 TEST_HEADER include/spdk/queue.h 00:05:52.321 TEST_HEADER 
include/spdk/reduce.h 00:05:52.321 TEST_HEADER include/spdk/rpc.h 00:05:52.321 TEST_HEADER include/spdk/scheduler.h 00:05:52.321 TEST_HEADER include/spdk/scsi.h 00:05:52.321 TEST_HEADER include/spdk/scsi_spec.h 00:05:52.321 TEST_HEADER include/spdk/sock.h 00:05:52.321 TEST_HEADER include/spdk/stdinc.h 00:05:52.321 CC test/app/bdev_svc/bdev_svc.o 00:05:52.321 TEST_HEADER include/spdk/string.h 00:05:52.321 TEST_HEADER include/spdk/thread.h 00:05:52.321 TEST_HEADER include/spdk/trace.h 00:05:52.321 TEST_HEADER include/spdk/trace_parser.h 00:05:52.321 TEST_HEADER include/spdk/tree.h 00:05:52.321 TEST_HEADER include/spdk/ublk.h 00:05:52.321 TEST_HEADER include/spdk/util.h 00:05:52.321 TEST_HEADER include/spdk/uuid.h 00:05:52.321 TEST_HEADER include/spdk/version.h 00:05:52.321 CC test/env/mem_callbacks/mem_callbacks.o 00:05:52.321 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:52.321 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:52.321 TEST_HEADER include/spdk/vhost.h 00:05:52.321 TEST_HEADER include/spdk/vmd.h 00:05:52.321 TEST_HEADER include/spdk/xor.h 00:05:52.321 TEST_HEADER include/spdk/zipf.h 00:05:52.321 CXX test/cpp_headers/accel.o 00:05:52.321 LINK rpc_client_test 00:05:52.321 LINK interrupt_tgt 00:05:52.321 LINK zipf 00:05:52.321 LINK poller_perf 00:05:52.588 LINK ioat_perf 00:05:52.589 LINK bdev_svc 00:05:52.589 CXX test/cpp_headers/accel_module.o 00:05:52.589 CXX test/cpp_headers/assert.o 00:05:52.589 CXX test/cpp_headers/barrier.o 00:05:52.589 LINK spdk_trace 00:05:52.589 CXX test/cpp_headers/base64.o 00:05:52.589 CXX test/cpp_headers/bdev.o 00:05:52.589 CC examples/ioat/verify/verify.o 00:05:52.855 LINK test_dma 00:05:52.855 CXX test/cpp_headers/bdev_module.o 00:05:52.855 CC test/app/histogram_perf/histogram_perf.o 00:05:52.855 CC test/env/vtophys/vtophys.o 00:05:52.855 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:52.855 CC test/env/memory/memory_ut.o 00:05:52.855 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:52.855 CC 
app/trace_record/trace_record.o 00:05:52.855 LINK verify 00:05:53.126 LINK mem_callbacks 00:05:53.126 LINK vtophys 00:05:53.126 LINK histogram_perf 00:05:53.126 CXX test/cpp_headers/bdev_zone.o 00:05:53.126 LINK env_dpdk_post_init 00:05:53.398 LINK spdk_trace_record 00:05:53.398 CC test/event/event_perf/event_perf.o 00:05:53.398 CXX test/cpp_headers/bit_array.o 00:05:53.398 CC examples/sock/hello_world/hello_sock.o 00:05:53.398 CC examples/thread/thread/thread_ex.o 00:05:53.398 CC app/nvmf_tgt/nvmf_main.o 00:05:53.398 CC app/iscsi_tgt/iscsi_tgt.o 00:05:53.398 CXX test/cpp_headers/bit_pool.o 00:05:53.398 LINK event_perf 00:05:53.398 LINK nvme_fuzz 00:05:53.398 CC app/spdk_tgt/spdk_tgt.o 00:05:53.657 LINK nvmf_tgt 00:05:53.657 CC test/event/reactor/reactor.o 00:05:53.657 CXX test/cpp_headers/blob_bdev.o 00:05:53.657 LINK iscsi_tgt 00:05:53.657 LINK hello_sock 00:05:53.657 LINK thread 00:05:53.657 LINK spdk_tgt 00:05:53.657 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:53.915 LINK reactor 00:05:53.915 CC test/nvme/aer/aer.o 00:05:53.915 CXX test/cpp_headers/blobfs_bdev.o 00:05:54.174 CC app/spdk_lspci/spdk_lspci.o 00:05:54.174 CC test/accel/dif/dif.o 00:05:54.174 CC test/event/reactor_perf/reactor_perf.o 00:05:54.174 CXX test/cpp_headers/blobfs.o 00:05:54.174 CC test/blobfs/mkfs/mkfs.o 00:05:54.174 CC examples/vmd/lsvmd/lsvmd.o 00:05:54.174 LINK aer 00:05:54.174 CC test/lvol/esnap/esnap.o 00:05:54.174 LINK spdk_lspci 00:05:54.174 LINK reactor_perf 00:05:54.433 CXX test/cpp_headers/blob.o 00:05:54.433 LINK lsvmd 00:05:54.433 LINK memory_ut 00:05:54.433 LINK mkfs 00:05:54.433 CC test/nvme/reset/reset.o 00:05:54.433 CXX test/cpp_headers/conf.o 00:05:54.433 CC app/spdk_nvme_perf/perf.o 00:05:54.433 CC test/event/app_repeat/app_repeat.o 00:05:54.691 CC examples/vmd/led/led.o 00:05:54.691 CC test/env/pci/pci_ut.o 00:05:54.691 CXX test/cpp_headers/config.o 00:05:54.691 CC app/spdk_nvme_identify/identify.o 00:05:54.691 LINK app_repeat 00:05:54.691 CXX 
test/cpp_headers/cpuset.o 00:05:54.691 LINK reset 00:05:54.950 LINK led 00:05:54.950 CXX test/cpp_headers/crc16.o 00:05:54.950 LINK dif 00:05:55.208 CC test/event/scheduler/scheduler.o 00:05:55.208 CC test/nvme/sgl/sgl.o 00:05:55.208 LINK pci_ut 00:05:55.208 CXX test/cpp_headers/crc32.o 00:05:55.208 CC examples/idxd/perf/perf.o 00:05:55.208 CXX test/cpp_headers/crc64.o 00:05:55.466 LINK scheduler 00:05:55.466 LINK sgl 00:05:55.466 CXX test/cpp_headers/dif.o 00:05:55.466 CC test/app/jsoncat/jsoncat.o 00:05:55.466 CC test/app/stub/stub.o 00:05:55.724 LINK spdk_nvme_perf 00:05:55.724 CXX test/cpp_headers/dma.o 00:05:55.724 LINK jsoncat 00:05:55.724 CC test/nvme/e2edp/nvme_dp.o 00:05:55.724 LINK idxd_perf 00:05:55.724 LINK stub 00:05:55.724 CC test/bdev/bdevio/bdevio.o 00:05:55.724 LINK spdk_nvme_identify 00:05:55.983 CXX test/cpp_headers/endian.o 00:05:55.983 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:55.983 CC test/nvme/overhead/overhead.o 00:05:55.983 LINK iscsi_fuzz 00:05:55.983 LINK nvme_dp 00:05:55.983 CXX test/cpp_headers/env_dpdk.o 00:05:55.983 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:56.242 CC app/spdk_nvme_discover/discovery_aer.o 00:05:56.242 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:56.242 CC examples/accel/perf/accel_perf.o 00:05:56.242 CXX test/cpp_headers/env.o 00:05:56.242 LINK bdevio 00:05:56.242 LINK overhead 00:05:56.242 CC test/nvme/err_injection/err_injection.o 00:05:56.500 LINK spdk_nvme_discover 00:05:56.500 CC test/nvme/startup/startup.o 00:05:56.500 CXX test/cpp_headers/event.o 00:05:56.500 LINK hello_fsdev 00:05:56.500 CXX test/cpp_headers/fd_group.o 00:05:56.500 CXX test/cpp_headers/fd.o 00:05:56.500 LINK err_injection 00:05:56.500 LINK vhost_fuzz 00:05:56.759 LINK startup 00:05:56.759 CC app/spdk_top/spdk_top.o 00:05:56.759 CC test/nvme/reserve/reserve.o 00:05:56.759 CXX test/cpp_headers/file.o 00:05:56.759 CXX test/cpp_headers/fsdev.o 00:05:56.759 CC test/nvme/simple_copy/simple_copy.o 00:05:56.759 CC 
test/nvme/connect_stress/connect_stress.o 00:05:56.759 LINK accel_perf 00:05:57.017 CC test/nvme/boot_partition/boot_partition.o 00:05:57.017 CC test/nvme/compliance/nvme_compliance.o 00:05:57.017 LINK reserve 00:05:57.017 CXX test/cpp_headers/fsdev_module.o 00:05:57.017 LINK boot_partition 00:05:57.017 LINK connect_stress 00:05:57.017 CC app/vhost/vhost.o 00:05:57.276 LINK simple_copy 00:05:57.276 CXX test/cpp_headers/ftl.o 00:05:57.276 CC examples/blob/hello_world/hello_blob.o 00:05:57.276 CC examples/blob/cli/blobcli.o 00:05:57.276 LINK nvme_compliance 00:05:57.276 LINK vhost 00:05:57.276 CXX test/cpp_headers/fuse_dispatcher.o 00:05:57.276 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:57.534 CC test/nvme/fused_ordering/fused_ordering.o 00:05:57.534 CXX test/cpp_headers/gpt_spec.o 00:05:57.534 LINK hello_blob 00:05:57.534 CC test/nvme/fdp/fdp.o 00:05:57.534 LINK doorbell_aers 00:05:57.534 CC test/nvme/cuse/cuse.o 00:05:57.793 LINK fused_ordering 00:05:57.793 CXX test/cpp_headers/hexlify.o 00:05:57.793 CC examples/nvme/hello_world/hello_world.o 00:05:57.793 LINK spdk_top 00:05:58.053 CC examples/nvme/reconnect/reconnect.o 00:05:58.053 LINK blobcli 00:05:58.053 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:58.053 CXX test/cpp_headers/histogram_data.o 00:05:58.053 CC examples/bdev/hello_world/hello_bdev.o 00:05:58.053 LINK fdp 00:05:58.053 LINK hello_world 00:05:58.053 CC app/spdk_dd/spdk_dd.o 00:05:58.312 CXX test/cpp_headers/idxd.o 00:05:58.312 CXX test/cpp_headers/idxd_spec.o 00:05:58.312 CXX test/cpp_headers/init.o 00:05:58.312 LINK hello_bdev 00:05:58.312 LINK reconnect 00:05:58.312 CXX test/cpp_headers/ioat.o 00:05:58.312 CC examples/nvme/arbitration/arbitration.o 00:05:58.571 CXX test/cpp_headers/ioat_spec.o 00:05:58.571 CC examples/nvme/hotplug/hotplug.o 00:05:58.571 CXX test/cpp_headers/iscsi_spec.o 00:05:58.571 LINK nvme_manage 00:05:58.571 LINK spdk_dd 00:05:58.571 CC examples/bdev/bdevperf/bdevperf.o 00:05:58.831 CXX test/cpp_headers/json.o 
00:05:58.831 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:58.831 CXX test/cpp_headers/jsonrpc.o 00:05:58.831 CXX test/cpp_headers/keyring.o 00:05:58.831 LINK hotplug 00:05:58.831 CXX test/cpp_headers/keyring_module.o 00:05:58.831 CXX test/cpp_headers/likely.o 00:05:58.831 LINK arbitration 00:05:59.089 LINK cmb_copy 00:05:59.089 CC app/fio/nvme/fio_plugin.o 00:05:59.089 CC app/fio/bdev/fio_plugin.o 00:05:59.089 CXX test/cpp_headers/log.o 00:05:59.089 CC examples/nvme/abort/abort.o 00:05:59.089 CXX test/cpp_headers/lvol.o 00:05:59.089 CXX test/cpp_headers/md5.o 00:05:59.089 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:59.347 CXX test/cpp_headers/memory.o 00:05:59.347 CXX test/cpp_headers/mmio.o 00:05:59.347 LINK pmr_persistence 00:05:59.347 CXX test/cpp_headers/nbd.o 00:05:59.605 LINK cuse 00:05:59.605 CXX test/cpp_headers/net.o 00:05:59.605 CXX test/cpp_headers/notify.o 00:05:59.605 CXX test/cpp_headers/nvme.o 00:05:59.605 CXX test/cpp_headers/nvme_intel.o 00:05:59.605 LINK abort 00:05:59.605 CXX test/cpp_headers/nvme_ocssd.o 00:05:59.605 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:59.865 CXX test/cpp_headers/nvme_spec.o 00:05:59.865 LINK bdevperf 00:05:59.865 LINK spdk_nvme 00:05:59.865 LINK spdk_bdev 00:05:59.865 CXX test/cpp_headers/nvme_zns.o 00:05:59.865 CXX test/cpp_headers/nvmf_cmd.o 00:05:59.865 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:59.865 CXX test/cpp_headers/nvmf.o 00:05:59.865 CXX test/cpp_headers/nvmf_spec.o 00:05:59.865 CXX test/cpp_headers/nvmf_transport.o 00:05:59.865 CXX test/cpp_headers/opal.o 00:05:59.865 CXX test/cpp_headers/opal_spec.o 00:06:00.124 CXX test/cpp_headers/pci_ids.o 00:06:00.124 CXX test/cpp_headers/pipe.o 00:06:00.124 CXX test/cpp_headers/queue.o 00:06:00.124 CXX test/cpp_headers/reduce.o 00:06:00.124 CXX test/cpp_headers/rpc.o 00:06:00.124 CXX test/cpp_headers/scheduler.o 00:06:00.124 CXX test/cpp_headers/scsi.o 00:06:00.124 CXX test/cpp_headers/scsi_spec.o 00:06:00.124 CXX test/cpp_headers/sock.o 00:06:00.383 CC 
examples/nvmf/nvmf/nvmf.o 00:06:00.383 CXX test/cpp_headers/stdinc.o 00:06:00.383 CXX test/cpp_headers/string.o 00:06:00.383 CXX test/cpp_headers/thread.o 00:06:00.383 CXX test/cpp_headers/trace.o 00:06:00.383 CXX test/cpp_headers/trace_parser.o 00:06:00.383 CXX test/cpp_headers/tree.o 00:06:00.383 CXX test/cpp_headers/ublk.o 00:06:00.383 CXX test/cpp_headers/util.o 00:06:00.383 CXX test/cpp_headers/uuid.o 00:06:00.383 CXX test/cpp_headers/version.o 00:06:00.383 CXX test/cpp_headers/vfio_user_pci.o 00:06:00.383 CXX test/cpp_headers/vfio_user_spec.o 00:06:00.643 CXX test/cpp_headers/vhost.o 00:06:00.643 CXX test/cpp_headers/vmd.o 00:06:00.643 CXX test/cpp_headers/xor.o 00:06:00.643 CXX test/cpp_headers/zipf.o 00:06:00.643 LINK nvmf 00:06:02.546 LINK esnap 00:06:02.804 00:06:02.804 real 1m45.235s 00:06:02.804 user 9m25.119s 00:06:02.804 sys 1m51.324s 00:06:02.804 08:37:59 make -- common/autotest_common.sh@1127 -- $ xtrace_disable 00:06:02.804 08:37:59 make -- common/autotest_common.sh@10 -- $ set +x 00:06:02.805 ************************************ 00:06:02.805 END TEST make 00:06:02.805 ************************************ 00:06:02.805 08:37:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:02.805 08:37:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:02.805 08:37:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:02.805 08:37:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.805 08:37:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:02.805 08:37:59 -- pm/common@44 -- $ pid=5246 00:06:02.805 08:37:59 -- pm/common@50 -- $ kill -TERM 5246 00:06:02.805 08:37:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.805 08:37:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:02.805 08:37:59 -- pm/common@44 -- $ pid=5248 00:06:02.805 08:37:59 -- pm/common@50 -- $ kill -TERM 5248 
00:06:02.805 08:37:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:02.805 08:37:59 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:02.805 08:37:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:02.805 08:37:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:02.805 08:37:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:03.063 08:37:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:03.063 08:37:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.063 08:37:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.063 08:37:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.063 08:37:59 -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.063 08:37:59 -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.063 08:37:59 -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.063 08:37:59 -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.063 08:37:59 -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.063 08:37:59 -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.063 08:37:59 -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.063 08:37:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.063 08:37:59 -- scripts/common.sh@344 -- # case "$op" in 00:06:03.063 08:37:59 -- scripts/common.sh@345 -- # : 1 00:06:03.063 08:37:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.063 08:37:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.063 08:37:59 -- scripts/common.sh@365 -- # decimal 1 00:06:03.063 08:37:59 -- scripts/common.sh@353 -- # local d=1 00:06:03.063 08:37:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.063 08:37:59 -- scripts/common.sh@355 -- # echo 1 00:06:03.063 08:37:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.063 08:37:59 -- scripts/common.sh@366 -- # decimal 2 00:06:03.063 08:37:59 -- scripts/common.sh@353 -- # local d=2 00:06:03.063 08:37:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.063 08:37:59 -- scripts/common.sh@355 -- # echo 2 00:06:03.063 08:37:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.063 08:37:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.063 08:37:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.063 08:37:59 -- scripts/common.sh@368 -- # return 0 00:06:03.063 08:37:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.063 08:37:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:03.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.063 --rc genhtml_branch_coverage=1 00:06:03.063 --rc genhtml_function_coverage=1 00:06:03.063 --rc genhtml_legend=1 00:06:03.063 --rc geninfo_all_blocks=1 00:06:03.063 --rc geninfo_unexecuted_blocks=1 00:06:03.063 00:06:03.063 ' 00:06:03.063 08:37:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:03.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.063 --rc genhtml_branch_coverage=1 00:06:03.063 --rc genhtml_function_coverage=1 00:06:03.063 --rc genhtml_legend=1 00:06:03.063 --rc geninfo_all_blocks=1 00:06:03.063 --rc geninfo_unexecuted_blocks=1 00:06:03.063 00:06:03.063 ' 00:06:03.063 08:37:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:03.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.063 --rc genhtml_branch_coverage=1 00:06:03.063 --rc 
genhtml_function_coverage=1 00:06:03.063 --rc genhtml_legend=1 00:06:03.063 --rc geninfo_all_blocks=1 00:06:03.063 --rc geninfo_unexecuted_blocks=1 00:06:03.063 00:06:03.063 ' 00:06:03.063 08:37:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:03.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.063 --rc genhtml_branch_coverage=1 00:06:03.063 --rc genhtml_function_coverage=1 00:06:03.063 --rc genhtml_legend=1 00:06:03.063 --rc geninfo_all_blocks=1 00:06:03.063 --rc geninfo_unexecuted_blocks=1 00:06:03.063 00:06:03.063 ' 00:06:03.063 08:37:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.063 08:37:59 -- nvmf/common.sh@7 -- # uname -s 00:06:03.063 08:37:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.063 08:37:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.063 08:37:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.063 08:37:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.063 08:37:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.063 08:37:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.063 08:37:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.063 08:37:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.063 08:37:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.063 08:37:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.063 08:37:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f013caa0-29b3-4b77-8191-05c16480dbd7 00:06:03.063 08:37:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=f013caa0-29b3-4b77-8191-05c16480dbd7 00:06:03.063 08:37:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.063 08:37:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.063 08:37:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.063 08:37:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:03.063 08:37:59 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.063 08:37:59 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.063 08:37:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.063 08:37:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.063 08:37:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.063 08:37:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.063 08:37:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.063 08:37:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.064 08:37:59 -- paths/export.sh@5 -- # export PATH 00:06:03.064 08:37:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.064 08:37:59 -- nvmf/common.sh@51 -- # : 0 00:06:03.064 08:37:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.064 08:37:59 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.064 08:37:59 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:06:03.064 08:37:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.064 08:37:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.064 08:37:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.064 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.064 08:37:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.064 08:37:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.064 08:37:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.064 08:37:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:03.064 08:37:59 -- spdk/autotest.sh@32 -- # uname -s 00:06:03.064 08:37:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:03.064 08:37:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:03.064 08:37:59 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:03.064 08:37:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:03.064 08:37:59 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:03.064 08:37:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:03.064 08:37:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:03.064 08:37:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:03.064 08:37:59 -- spdk/autotest.sh@48 -- # udevadm_pid=54392 00:06:03.064 08:37:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:03.064 08:37:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:03.064 08:37:59 -- pm/common@17 -- # local monitor 00:06:03.064 08:37:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:03.064 08:37:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:03.064 08:37:59 -- pm/common@21 -- # date +%s 00:06:03.064 08:37:59 -- pm/common@21 -- # date +%s 00:06:03.064 08:37:59 -- 
pm/common@25 -- # sleep 1 00:06:03.064 08:37:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732696679 00:06:03.064 08:37:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732696679 00:06:03.064 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732696679_collect-cpu-load.pm.log 00:06:03.064 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732696679_collect-vmstat.pm.log 00:06:03.996 08:38:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:03.996 08:38:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:03.996 08:38:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.996 08:38:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.996 08:38:00 -- spdk/autotest.sh@59 -- # create_test_list 00:06:03.996 08:38:00 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:03.996 08:38:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.996 fatal: not a git repository (or any parent up to mount point /) 00:06:03.996 Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set). 00:06:04.253 fatal: not a git repository (or any parent up to mount point /) 00:06:04.253 Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set). 
00:06:04.253 08:38:00 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:06:04.253 08:38:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:06:04.253 08:38:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:06:04.253 08:38:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:06:04.253 08:38:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:06:04.253 08:38:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:06:04.253 08:38:00 -- common/autotest_common.sh@1454 -- # uname
00:06:04.253 08:38:00 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']'
00:06:04.253 08:38:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:06:04.253 08:38:00 -- common/autotest_common.sh@1474 -- # uname
00:06:04.253 08:38:00 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]]
00:06:04.253 08:38:00 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:06:04.253 08:38:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:06:04.253 lcov: LCOV version 1.15
00:06:04.253 08:38:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:06:22.337 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:06:22.337 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:06:40.433 08:38:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:40.433 08:38:35 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:40.433 08:38:35 -- common/autotest_common.sh@10 -- # set +x
00:06:40.433 08:38:35 -- spdk/autotest.sh@78 -- # rm -f
00:06:40.433 08:38:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:40.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:40.433 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:06:40.433 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:06:40.433 08:38:36 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:40.433 08:38:36 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:06:40.433 08:38:36 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:06:40.433 08:38:36 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:06:40.433 08:38:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:06:40.433 08:38:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:06:40.433 08:38:36 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:06:40.433 08:38:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:40.433 08:38:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:06:40.433 08:38:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:06:40.433 08:38:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n2
00:06:40.433 08:38:36 -- common/autotest_common.sh@1647 -- # local device=nvme0n2
00:06:40.433 08:38:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:06:40.433 08:38:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:06:40.433 08:38:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:06:40.433 08:38:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n3
00:06:40.433 08:38:36 -- common/autotest_common.sh@1647 -- # local device=nvme0n3
00:06:40.433 08:38:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:06:40.433 08:38:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:06:40.433 08:38:36 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:06:40.433 08:38:36 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1
00:06:40.433 08:38:36 -- common/autotest_common.sh@1647 -- # local device=nvme1n1
00:06:40.433 08:38:36 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:06:40.433 08:38:36 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:06:40.433 08:38:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:40.433 08:38:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:40.433 08:38:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:40.433 08:38:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:40.433 08:38:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:40.433 08:38:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:40.433 No valid GPT data, bailing
00:06:40.433 08:38:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:40.433 08:38:36 -- scripts/common.sh@394 -- # pt=
00:06:40.433 08:38:36 -- scripts/common.sh@395 -- # return 1
00:06:40.433 08:38:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:40.433 1+0 records in
00:06:40.433 1+0 records out
00:06:40.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485717 s, 216 MB/s
00:06:40.433 08:38:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:40.433 08:38:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:40.433 08:38:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2
00:06:40.433 08:38:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt
00:06:40.433 08:38:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2
00:06:40.433 No valid GPT data, bailing
00:06:40.433 08:38:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:06:40.433 08:38:36 -- scripts/common.sh@394 -- # pt=
00:06:40.433 08:38:36 -- scripts/common.sh@395 -- # return 1
00:06:40.433 08:38:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1
00:06:40.433 1+0 records in
00:06:40.433 1+0 records out
00:06:40.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434888 s, 241 MB/s
00:06:40.433 08:38:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:40.433 08:38:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:40.433 08:38:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3
00:06:40.433 08:38:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt
00:06:40.433 08:38:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3
00:06:40.433 No valid GPT data, bailing
00:06:40.433 08:38:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3
00:06:40.433 08:38:36 -- scripts/common.sh@394 -- # pt=
00:06:40.433 08:38:36 -- scripts/common.sh@395 -- # return 1
00:06:40.433 08:38:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1
00:06:40.433 1+0 records in
00:06:40.433 1+0 records out
00:06:40.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398164 s, 263 MB/s
00:06:40.433 08:38:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:40.433 08:38:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:40.433 08:38:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:06:40.433 08:38:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:06:40.433 08:38:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:06:40.433 No valid GPT data, bailing
00:06:40.433 08:38:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:06:40.433 08:38:36 -- scripts/common.sh@394 -- # pt=
00:06:40.433 08:38:36 -- scripts/common.sh@395 -- # return 1
00:06:40.433 08:38:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:06:40.433 1+0 records in
00:06:40.433 1+0 records out
00:06:40.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412038 s, 254 MB/s
00:06:40.433 08:38:36 -- spdk/autotest.sh@105 -- # sync
00:06:40.433 08:38:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:40.433 08:38:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:40.433 08:38:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:42.335 08:38:38 -- spdk/autotest.sh@111 -- # uname -s
00:06:42.335 08:38:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:42.335 08:38:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:42.335 08:38:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:42.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:42.592 Hugepages
00:06:42.592 node hugesize free / total
00:06:42.592 node0 1048576kB 0 / 0
00:06:42.592 node0 2048kB 0 / 0
00:06:42.592
00:06:42.851 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:42.851 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:42.851 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:06:42.851 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:06:42.851 08:38:39 -- spdk/autotest.sh@117 -- # uname -s
00:06:42.851 08:38:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:42.851 08:38:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:42.851 08:38:39 -- common/autotest_common.sh@1513 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:43.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:43.734 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:43.734 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:43.734 08:38:40 -- common/autotest_common.sh@1514 -- # sleep 1
00:06:45.110 08:38:41 -- common/autotest_common.sh@1515 -- # bdfs=()
00:06:45.110 08:38:41 -- common/autotest_common.sh@1515 -- # local bdfs
00:06:45.110 08:38:41 -- common/autotest_common.sh@1517 -- # bdfs=($(get_nvme_bdfs))
00:06:45.110 08:38:41 -- common/autotest_common.sh@1517 -- # get_nvme_bdfs
00:06:45.110 08:38:41 -- common/autotest_common.sh@1495 -- # bdfs=()
00:06:45.110 08:38:41 -- common/autotest_common.sh@1495 -- # local bdfs
00:06:45.110 08:38:41 -- common/autotest_common.sh@1496 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:45.110 08:38:41 -- common/autotest_common.sh@1496 -- # jq -r '.config[].params.traddr'
00:06:45.110 08:38:41 -- common/autotest_common.sh@1496 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:45.110 08:38:41 -- common/autotest_common.sh@1497 -- # (( 2 == 0 ))
00:06:45.110 08:38:41 -- common/autotest_common.sh@1501 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:45.110 08:38:41 -- common/autotest_common.sh@1519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:45.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:45.110 Waiting for block devices as requested
00:06:45.369 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:45.369 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:45.369 08:38:42 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}"
00:06:45.369 08:38:42 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:45.369 08:38:42 -- common/autotest_common.sh@1484 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:45.369 08:38:42 -- common/autotest_common.sh@1484 -- # grep 0000:00:10.0/nvme/nvme
00:06:45.369 08:38:42 -- common/autotest_common.sh@1484 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:45.369 08:38:42 -- common/autotest_common.sh@1485 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:06:45.369 08:38:42 -- common/autotest_common.sh@1489 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:45.369 08:38:42 -- common/autotest_common.sh@1489 -- # printf '%s\n' nvme1
00:06:45.369 08:38:42 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme1
00:06:45.369 08:38:42 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme1 ]]
00:06:45.369 08:38:42 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme1
00:06:45.369 08:38:42 -- common/autotest_common.sh@1528 -- # grep oacs
00:06:45.369 08:38:42 -- common/autotest_common.sh@1528 -- # cut -d: -f2
00:06:45.369 08:38:42 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a'
00:06:45.369 08:38:42 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8
00:06:45.369 08:38:42 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]]
00:06:45.369 08:38:42 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme1
00:06:45.369 08:38:42 -- common/autotest_common.sh@1537 -- # grep unvmcap
00:06:45.369 08:38:42 -- common/autotest_common.sh@1537 -- # cut -d: -f2
00:06:45.628 08:38:42 -- common/autotest_common.sh@1537 -- # unvmcap=' 0'
00:06:45.628 08:38:42 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]]
00:06:45.628 08:38:42 -- common/autotest_common.sh@1540 -- # continue
00:06:45.628 08:38:42 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}"
00:06:45.628 08:38:42 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:06:45.628 08:38:42 -- common/autotest_common.sh@1484 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:45.628 08:38:42 -- common/autotest_common.sh@1484 -- # grep 0000:00:11.0/nvme/nvme
00:06:45.628 08:38:42 -- common/autotest_common.sh@1484 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:45.628 08:38:42 -- common/autotest_common.sh@1485 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:06:45.628 08:38:42 -- common/autotest_common.sh@1489 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:45.628 08:38:42 -- common/autotest_common.sh@1489 -- # printf '%s\n' nvme0
00:06:45.628 08:38:42 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme0
00:06:45.628 08:38:42 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme0 ]]
00:06:45.628 08:38:42 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme0
00:06:45.628 08:38:42 -- common/autotest_common.sh@1528 -- # grep oacs
00:06:45.628 08:38:42 -- common/autotest_common.sh@1528 -- # cut -d: -f2
00:06:45.628 08:38:42 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a'
00:06:45.628 08:38:42 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8
00:06:45.628 08:38:42 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]]
00:06:45.628 08:38:42 -- common/autotest_common.sh@1537 -- # grep unvmcap
00:06:45.628 08:38:42 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme0
00:06:45.628 08:38:42 -- common/autotest_common.sh@1537 -- # cut -d: -f2
00:06:45.628 08:38:42 -- common/autotest_common.sh@1537 -- # unvmcap=' 0'
00:06:45.628 08:38:42 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]]
00:06:45.628 08:38:42 -- common/autotest_common.sh@1540 -- # continue
00:06:45.628 08:38:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:45.628 08:38:42 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:45.628 08:38:42 -- common/autotest_common.sh@10 -- # set +x
00:06:45.628 08:38:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:45.628 08:38:42 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:45.628 08:38:42 -- common/autotest_common.sh@10 -- # set +x
00:06:45.628 08:38:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:46.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:46.196 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:46.455 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:46.455 08:38:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:46.455 08:38:43 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:46.455 08:38:43 -- common/autotest_common.sh@10 -- # set +x
00:06:46.455 08:38:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:46.455 08:38:43 -- common/autotest_common.sh@1575 -- # mapfile -t bdfs
00:06:46.455 08:38:43 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs_by_id 0x0a54
00:06:46.455 08:38:43 -- common/autotest_common.sh@1560 -- # bdfs=()
00:06:46.455 08:38:43 -- common/autotest_common.sh@1560 -- # _bdfs=()
00:06:46.455 08:38:43 -- common/autotest_common.sh@1560 -- # local bdfs _bdfs
00:06:46.455 08:38:43 -- common/autotest_common.sh@1561 -- # _bdfs=($(get_nvme_bdfs))
00:06:46.455 08:38:43 -- common/autotest_common.sh@1561 -- # get_nvme_bdfs
00:06:46.455 08:38:43 -- common/autotest_common.sh@1495 -- # bdfs=()
00:06:46.455 08:38:43 -- common/autotest_common.sh@1495 -- # local bdfs
00:06:46.455 08:38:43 -- common/autotest_common.sh@1496 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:46.455 08:38:43 -- common/autotest_common.sh@1496 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:46.455 08:38:43 -- common/autotest_common.sh@1496 -- # jq -r '.config[].params.traddr'
00:06:46.455 08:38:43 -- common/autotest_common.sh@1497 -- # (( 2 == 0 ))
00:06:46.455 08:38:43 -- common/autotest_common.sh@1501 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:46.455 08:38:43 -- common/autotest_common.sh@1562 -- # for bdf in "${_bdfs[@]}"
00:06:46.455 08:38:43 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:46.455 08:38:43 -- common/autotest_common.sh@1563 -- # device=0x0010
00:06:46.455 08:38:43 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:46.455 08:38:43 -- common/autotest_common.sh@1562 -- # for bdf in "${_bdfs[@]}"
00:06:46.455 08:38:43 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:46.455 08:38:43 -- common/autotest_common.sh@1563 -- # device=0x0010
00:06:46.455 08:38:43 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:46.455 08:38:43 -- common/autotest_common.sh@1569 -- # (( 0 > 0 ))
00:06:46.455 08:38:43 -- common/autotest_common.sh@1569 -- # return 0
00:06:46.455 08:38:43 -- common/autotest_common.sh@1576 -- # [[ -z '' ]]
00:06:46.455 08:38:43 -- common/autotest_common.sh@1577 -- # return 0
00:06:46.455 08:38:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:46.455 08:38:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:46.455 08:38:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:46.455 08:38:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:46.455 08:38:43 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:46.455 08:38:43 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:46.455 08:38:43 -- common/autotest_common.sh@10 -- # set +x
00:06:46.455 08:38:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:46.455 08:38:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:46.455 08:38:43 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:06:46.455 08:38:43 -- common/autotest_common.sh@1108 -- # xtrace_disable
00:06:46.455 08:38:43 -- common/autotest_common.sh@10 -- # set +x
00:06:46.455 ************************************
00:06:46.455 START TEST env
00:06:46.455 ************************************
00:06:46.455 08:38:43 env -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:46.715 * Looking for test storage...
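(Editorial note on the pre_cleanup trace above: each NVMe namespace is checked for a partition table via scripts/spdk-gpt.py and blkid, and when none is found its first MiB is zeroed with dd. A minimal hedged sketch of that pattern follows; the helper name `wipe_if_unpartitioned` is ours, not SPDK's, and the real logic lives in spdk/autotest.sh and scripts/common.sh.)

```shell
#!/usr/bin/env bash
# Hedged sketch of the wipe pattern seen in the log above.
# Assumption: blkid prints the partition-table type (gpt, dos, ...) for
# a partitioned device and nothing otherwise; SPDK additionally consults
# scripts/spdk-gpt.py, which this sketch omits.
wipe_if_unpartitioned() {
  local dev=$1 pt
  pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
  if [[ -z $pt ]]; then
    # "No valid GPT data, bailing": zero the first MiB so stale
    # metadata cannot leak into the next test run.
    dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc 2>/dev/null
  fi
}
```

On a real run the caller iterates the namespaces, as in the trace (`for dev in /dev/nvme*n!(*p*)`); `conv=notrunc` only matters when exercising the function against a regular file instead of a block device.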
00:06:46.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1690 -- # lcov --version
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:46.715 08:38:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:46.715 08:38:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:46.715 08:38:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:46.715 08:38:43 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:46.715 08:38:43 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:46.715 08:38:43 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:46.715 08:38:43 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:46.715 08:38:43 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:46.715 08:38:43 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:46.715 08:38:43 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:46.715 08:38:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:46.715 08:38:43 env -- scripts/common.sh@344 -- # case "$op" in
00:06:46.715 08:38:43 env -- scripts/common.sh@345 -- # : 1
00:06:46.715 08:38:43 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:46.715 08:38:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:46.715 08:38:43 env -- scripts/common.sh@365 -- # decimal 1
00:06:46.715 08:38:43 env -- scripts/common.sh@353 -- # local d=1
00:06:46.715 08:38:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:46.715 08:38:43 env -- scripts/common.sh@355 -- # echo 1
00:06:46.715 08:38:43 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:46.715 08:38:43 env -- scripts/common.sh@366 -- # decimal 2
00:06:46.715 08:38:43 env -- scripts/common.sh@353 -- # local d=2
00:06:46.715 08:38:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:46.715 08:38:43 env -- scripts/common.sh@355 -- # echo 2
00:06:46.715 08:38:43 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:46.715 08:38:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:46.715 08:38:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:46.715 08:38:43 env -- scripts/common.sh@368 -- # return 0
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:46.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.715 --rc genhtml_branch_coverage=1
00:06:46.715 --rc genhtml_function_coverage=1
00:06:46.715 --rc genhtml_legend=1
00:06:46.715 --rc geninfo_all_blocks=1
00:06:46.715 --rc geninfo_unexecuted_blocks=1
00:06:46.715
00:06:46.715 '
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:46.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.715 --rc genhtml_branch_coverage=1
00:06:46.715 --rc genhtml_function_coverage=1
00:06:46.715 --rc genhtml_legend=1
00:06:46.715 --rc geninfo_all_blocks=1
00:06:46.715 --rc geninfo_unexecuted_blocks=1
00:06:46.715
00:06:46.715 '
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:06:46.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.715 --rc genhtml_branch_coverage=1
00:06:46.715 --rc genhtml_function_coverage=1
00:06:46.715 --rc genhtml_legend=1
00:06:46.715 --rc geninfo_all_blocks=1
00:06:46.715 --rc geninfo_unexecuted_blocks=1
00:06:46.715
00:06:46.715 '
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:06:46.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:46.715 --rc genhtml_branch_coverage=1
00:06:46.715 --rc genhtml_function_coverage=1
00:06:46.715 --rc genhtml_legend=1
00:06:46.715 --rc geninfo_all_blocks=1
00:06:46.715 --rc geninfo_unexecuted_blocks=1
00:06:46.715
00:06:46.715 '
00:06:46.715 08:38:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:06:46.715 08:38:43 env -- common/autotest_common.sh@1108 -- # xtrace_disable
00:06:46.715 08:38:43 env -- common/autotest_common.sh@10 -- # set +x
00:06:46.715 ************************************
00:06:46.715 START TEST env_memory
00:06:46.715 ************************************
00:06:46.715 08:38:43 env.env_memory -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:46.715
00:06:46.715
00:06:46.715 CUnit - A unit testing framework for C - Version 2.1-3
00:06:46.715 http://cunit.sourceforge.net/
00:06:46.715
00:06:46.715
00:06:46.715 Suite: memory
00:06:46.974 Test: alloc and free memory map ...[2024-11-27 08:38:43.482625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:46.974 passed
00:06:46.974 Test: mem map translation ...[2024-11-27 08:38:43.530696] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:46.974 [2024-11-27 08:38:43.530812] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:46.974 [2024-11-27 08:38:43.530892] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:46.974 [2024-11-27 08:38:43.530921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:46.974 passed
00:06:46.974 Test: mem map registration ...[2024-11-27 08:38:43.610886] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:46.974 [2024-11-27 08:38:43.610995] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:46.974 passed
00:06:46.974 Test: mem map adjacent registrations ...passed
00:06:46.974
00:06:46.974 Run Summary: Type Total Ran Passed Failed Inactive
00:06:46.974 suites 1 1 n/a 0 0
00:06:46.974 tests 4 4 4 0 0
00:06:46.974 asserts 152 152 152 0 n/a
00:06:46.974
00:06:46.974 Elapsed time = 0.273 seconds
00:06:47.234
00:06:47.234 real 0m0.313s
00:06:47.234 user 0m0.280s
00:06:47.234 sys 0m0.027s
00:06:47.234 08:38:43 env.env_memory -- common/autotest_common.sh@1127 -- # xtrace_disable
00:06:47.234 08:38:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:47.234 ************************************
00:06:47.234 END TEST env_memory
00:06:47.234 ************************************
00:06:47.234 08:38:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:47.234 08:38:43 env -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:06:47.234 08:38:43 env -- common/autotest_common.sh@1108 -- # xtrace_disable
00:06:47.234 08:38:43 env -- common/autotest_common.sh@10 -- # set +x
00:06:47.234 ************************************
00:06:47.234 START TEST env_vtophys
00:06:47.234 ************************************
00:06:47.234 08:38:43 env.env_vtophys -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:47.234 EAL: lib.eal log level changed from notice to debug
00:06:47.234 EAL: Detected lcore 0 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 1 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 2 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 3 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 4 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 5 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 6 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 7 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 8 as core 0 on socket 0
00:06:47.234 EAL: Detected lcore 9 as core 0 on socket 0
00:06:47.234 EAL: Maximum logical cores by configuration: 128
00:06:47.234 EAL: Detected CPU lcores: 10
00:06:47.234 EAL: Detected NUMA nodes: 1
00:06:47.234 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:47.234 EAL: Detected shared linkage of DPDK
00:06:47.234 EAL: No shared files mode enabled, IPC will be disabled
00:06:47.234 EAL: Selected IOVA mode 'PA'
00:06:47.234 EAL: Probing VFIO support...
00:06:47.234 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:47.234 EAL: VFIO modules not loaded, skipping VFIO support...
00:06:47.234 EAL: Ask a virtual area of 0x2e000 bytes
00:06:47.234 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:47.234 EAL: Setting up physically contiguous memory...
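(Editorial cross-check of the EAL figures that follow: each memseg list is created with n_segs:8192 and hugepage_sz:2097152, which works out to exactly the 0x400000000-byte virtual area the log reports reserving per list. A small arithmetic sketch, assuming only the numbers printed in this log:)

```shell
# Cross-check the memseg sizing printed by EAL in this log.
n_segs=8192            # from "Creating 4 segment lists: n_segs:8192 ..."
hugepage_sz=2097152    # 2 MiB, from "hugepage_sz:2097152"
list_bytes=$((n_segs * hugepage_sz))
printf 'per-list VA reservation: 0x%x bytes\n' "$list_bytes"
# -> 0x400000000, matching "EAL: Ask a virtual area of 0x400000000 bytes"
printf 'total across 4 lists: %d GiB\n' $((4 * list_bytes / 1024**3))
# -> 64 GiB of virtual address space reserved up front
```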
00:06:47.234 EAL: Setting maximum number of open files to 524288
00:06:47.234 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:47.234 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:47.234 EAL: Ask a virtual area of 0x61000 bytes
00:06:47.234 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:47.234 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:47.234 EAL: Ask a virtual area of 0x400000000 bytes
00:06:47.234 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:47.234 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:47.234 EAL: Ask a virtual area of 0x61000 bytes
00:06:47.234 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:47.234 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:47.234 EAL: Ask a virtual area of 0x400000000 bytes
00:06:47.234 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:47.234 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:47.234 EAL: Ask a virtual area of 0x61000 bytes
00:06:47.234 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:47.234 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:47.234 EAL: Ask a virtual area of 0x400000000 bytes
00:06:47.234 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:47.234 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:47.234 EAL: Ask a virtual area of 0x61000 bytes
00:06:47.234 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:47.234 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:47.234 EAL: Ask a virtual area of 0x400000000 bytes
00:06:47.234 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:47.234 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:47.234 EAL: Hugepages will be freed exactly as allocated.
00:06:47.234 EAL: No shared files mode enabled, IPC is disabled
00:06:47.234 EAL: No shared files mode enabled, IPC is disabled
00:06:47.493 EAL: TSC frequency is ~2200000 KHz
00:06:47.493 EAL: Main lcore 0 is ready (tid=7f729a0cfa40;cpuset=[0])
00:06:47.493 EAL: Trying to obtain current memory policy.
00:06:47.493 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:47.493 EAL: Restoring previous memory policy: 0
00:06:47.493 EAL: request: mp_malloc_sync
00:06:47.493 EAL: No shared files mode enabled, IPC is disabled
00:06:47.493 EAL: Heap on socket 0 was expanded by 2MB
00:06:47.493 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:47.493 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:47.493 EAL: Mem event callback 'spdk:(nil)' registered
00:06:47.493 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:47.493
00:06:47.493
00:06:47.493 CUnit - A unit testing framework for C - Version 2.1-3
00:06:47.493 http://cunit.sourceforge.net/
00:06:47.493
00:06:47.493
00:06:47.493 Suite: components_suite
00:06:48.091 Test: vtophys_malloc_test ...passed
00:06:48.091 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:48.091 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.091 EAL: Restoring previous memory policy: 4
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was expanded by 4MB
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was shrunk by 4MB
00:06:48.091 EAL: Trying to obtain current memory policy.
00:06:48.091 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.091 EAL: Restoring previous memory policy: 4
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was expanded by 6MB
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was shrunk by 6MB
00:06:48.091 EAL: Trying to obtain current memory policy.
00:06:48.091 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.091 EAL: Restoring previous memory policy: 4
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was expanded by 10MB
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was shrunk by 10MB
00:06:48.091 EAL: Trying to obtain current memory policy.
00:06:48.091 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.091 EAL: Restoring previous memory policy: 4
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was expanded by 18MB
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was shrunk by 18MB
00:06:48.091 EAL: Trying to obtain current memory policy.
00:06:48.091 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.091 EAL: Restoring previous memory policy: 4
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was expanded by 34MB
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was shrunk by 34MB
00:06:48.091 EAL: Trying to obtain current memory policy.
00:06:48.091 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.091 EAL: Restoring previous memory policy: 4
00:06:48.091 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.091 EAL: request: mp_malloc_sync
00:06:48.091 EAL: No shared files mode enabled, IPC is disabled
00:06:48.091 EAL: Heap on socket 0 was expanded by 66MB
00:06:48.383 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.383 EAL: request: mp_malloc_sync
00:06:48.383 EAL: No shared files mode enabled, IPC is disabled
00:06:48.383 EAL: Heap on socket 0 was shrunk by 66MB
00:06:48.383 EAL: Trying to obtain current memory policy.
00:06:48.383 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.383 EAL: Restoring previous memory policy: 4
00:06:48.383 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.383 EAL: request: mp_malloc_sync
00:06:48.383 EAL: No shared files mode enabled, IPC is disabled
00:06:48.383 EAL: Heap on socket 0 was expanded by 130MB
00:06:48.673 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.673 EAL: request: mp_malloc_sync
00:06:48.673 EAL: No shared files mode enabled, IPC is disabled
00:06:48.673 EAL: Heap on socket 0 was shrunk by 130MB
00:06:48.932 EAL: Trying to obtain current memory policy.
00:06:48.932 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.932 EAL: Restoring previous memory policy: 4
00:06:48.932 EAL: Calling mem event callback 'spdk:(nil)'
00:06:48.932 EAL: request: mp_malloc_sync
00:06:48.932 EAL: No shared files mode enabled, IPC is disabled
00:06:48.932 EAL: Heap on socket 0 was expanded by 258MB
00:06:49.499 EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.499 EAL: request: mp_malloc_sync
00:06:49.499 EAL: No shared files mode enabled, IPC is disabled
00:06:49.499 EAL: Heap on socket 0 was shrunk by 258MB
00:06:49.757 EAL: Trying to obtain current memory policy.
00:06:49.757 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:50.015 EAL: Restoring previous memory policy: 4
00:06:50.015 EAL: Calling mem event callback 'spdk:(nil)'
00:06:50.015 EAL: request: mp_malloc_sync
00:06:50.015 EAL: No shared files mode enabled, IPC is disabled
00:06:50.015 EAL: Heap on socket 0 was expanded by 514MB
00:06:50.952 EAL: Calling mem event callback 'spdk:(nil)'
00:06:50.952 EAL: request: mp_malloc_sync
00:06:50.952 EAL: No shared files mode enabled, IPC is disabled
00:06:50.952 EAL: Heap on socket 0 was shrunk by 514MB
00:06:51.889 EAL: Trying to obtain current memory policy.
00:06:51.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:51.889 EAL: Restoring previous memory policy: 4 00:06:51.889 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.889 EAL: request: mp_malloc_sync 00:06:51.889 EAL: No shared files mode enabled, IPC is disabled 00:06:51.889 EAL: Heap on socket 0 was expanded by 1026MB 00:06:53.934 EAL: Calling mem event callback 'spdk:(nil)' 00:06:53.934 EAL: request: mp_malloc_sync 00:06:53.934 EAL: No shared files mode enabled, IPC is disabled 00:06:53.934 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:55.311 passed 00:06:55.311 00:06:55.311 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.311 suites 1 1 n/a 0 0 00:06:55.311 tests 2 2 2 0 0 00:06:55.311 asserts 5747 5747 5747 0 n/a 00:06:55.311 00:06:55.311 Elapsed time = 7.752 seconds 00:06:55.311 EAL: Calling mem event callback 'spdk:(nil)' 00:06:55.311 EAL: request: mp_malloc_sync 00:06:55.311 EAL: No shared files mode enabled, IPC is disabled 00:06:55.311 EAL: Heap on socket 0 was shrunk by 2MB 00:06:55.311 EAL: No shared files mode enabled, IPC is disabled 00:06:55.311 EAL: No shared files mode enabled, IPC is disabled 00:06:55.311 EAL: No shared files mode enabled, IPC is disabled 00:06:55.311 00:06:55.311 real 0m8.117s 00:06:55.311 user 0m6.836s 00:06:55.311 sys 0m1.107s 00:06:55.311 08:38:51 env.env_vtophys -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:55.311 08:38:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:55.311 ************************************ 00:06:55.311 END TEST env_vtophys 00:06:55.311 ************************************ 00:06:55.311 08:38:51 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:55.311 08:38:51 env -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:06:55.311 08:38:51 env -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:55.311 08:38:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.311 
************************************ 00:06:55.311 START TEST env_pci 00:06:55.311 ************************************ 00:06:55.311 08:38:51 env.env_pci -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:55.311 00:06:55.311 00:06:55.311 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.311 http://cunit.sourceforge.net/ 00:06:55.311 00:06:55.311 00:06:55.311 Suite: pci 00:06:55.311 Test: pci_hook ...[2024-11-27 08:38:52.004804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56728 has claimed it 00:06:55.311 passed 00:06:55.311 00:06:55.311 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.311 suites 1 1 n/a 0 0 00:06:55.311 tests 1 1 1 0 0 00:06:55.311 asserts 25 25 25 0 n/a 00:06:55.311 00:06:55.311 Elapsed time = 0.008 seconds 00:06:55.311 EAL: Cannot find device (10000:00:01.0) 00:06:55.311 EAL: Failed to attach device on primary process 00:06:55.311 00:06:55.311 real 0m0.088s 00:06:55.311 user 0m0.042s 00:06:55.311 sys 0m0.045s 00:06:55.311 08:38:52 env.env_pci -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:55.311 08:38:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:55.311 ************************************ 00:06:55.311 END TEST env_pci 00:06:55.311 ************************************ 00:06:55.571 08:38:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:55.571 08:38:52 env -- env/env.sh@15 -- # uname 00:06:55.571 08:38:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:55.571 08:38:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:55.571 08:38:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:55.571 08:38:52 env -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:06:55.571 08:38:52 env 
-- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:55.571 08:38:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.571 ************************************ 00:06:55.571 START TEST env_dpdk_post_init 00:06:55.571 ************************************ 00:06:55.571 08:38:52 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:55.571 EAL: Detected CPU lcores: 10 00:06:55.571 EAL: Detected NUMA nodes: 1 00:06:55.571 EAL: Detected shared linkage of DPDK 00:06:55.571 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:55.571 EAL: Selected IOVA mode 'PA' 00:06:55.830 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:55.830 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:55.830 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:55.830 Starting DPDK initialization... 00:06:55.830 Starting SPDK post initialization... 00:06:55.830 SPDK NVMe probe 00:06:55.830 Attaching to 0000:00:10.0 00:06:55.830 Attaching to 0000:00:11.0 00:06:55.830 Attached to 0000:00:10.0 00:06:55.830 Attached to 0000:00:11.0 00:06:55.830 Cleaning up... 
00:06:55.830 00:06:55.830 real 0m0.325s 00:06:55.830 user 0m0.115s 00:06:55.830 sys 0m0.108s 00:06:55.830 08:38:52 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:55.830 08:38:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 ************************************ 00:06:55.830 END TEST env_dpdk_post_init 00:06:55.830 ************************************ 00:06:55.830 08:38:52 env -- env/env.sh@26 -- # uname 00:06:55.830 08:38:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:55.830 08:38:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:55.830 08:38:52 env -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:06:55.830 08:38:52 env -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:55.830 08:38:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.830 ************************************ 00:06:55.830 START TEST env_mem_callbacks 00:06:55.830 ************************************ 00:06:55.830 08:38:52 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:55.830 EAL: Detected CPU lcores: 10 00:06:55.830 EAL: Detected NUMA nodes: 1 00:06:55.830 EAL: Detected shared linkage of DPDK 00:06:55.830 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:55.830 EAL: Selected IOVA mode 'PA' 00:06:56.089 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:56.089 00:06:56.089 00:06:56.089 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.089 http://cunit.sourceforge.net/ 00:06:56.089 00:06:56.089 00:06:56.089 Suite: memory 00:06:56.089 Test: test ... 
00:06:56.089 register 0x200000200000 2097152 00:06:56.089 malloc 3145728 00:06:56.089 register 0x200000400000 4194304 00:06:56.089 buf 0x2000004fffc0 len 3145728 PASSED 00:06:56.089 malloc 64 00:06:56.089 buf 0x2000004ffec0 len 64 PASSED 00:06:56.089 malloc 4194304 00:06:56.089 register 0x200000800000 6291456 00:06:56.089 buf 0x2000009fffc0 len 4194304 PASSED 00:06:56.089 free 0x2000004fffc0 3145728 00:06:56.089 free 0x2000004ffec0 64 00:06:56.089 unregister 0x200000400000 4194304 PASSED 00:06:56.089 free 0x2000009fffc0 4194304 00:06:56.089 unregister 0x200000800000 6291456 PASSED 00:06:56.089 malloc 8388608 00:06:56.089 register 0x200000400000 10485760 00:06:56.089 buf 0x2000005fffc0 len 8388608 PASSED 00:06:56.089 free 0x2000005fffc0 8388608 00:06:56.089 unregister 0x200000400000 10485760 PASSED 00:06:56.089 passed 00:06:56.089 00:06:56.089 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.089 suites 1 1 n/a 0 0 00:06:56.089 tests 1 1 1 0 0 00:06:56.089 asserts 15 15 15 0 n/a 00:06:56.089 00:06:56.089 Elapsed time = 0.074 seconds 00:06:56.089 00:06:56.089 real 0m0.289s 00:06:56.089 user 0m0.098s 00:06:56.089 sys 0m0.087s 00:06:56.089 08:38:52 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:56.089 08:38:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:56.089 ************************************ 00:06:56.089 END TEST env_mem_callbacks 00:06:56.089 ************************************ 00:06:56.089 00:06:56.089 real 0m9.609s 00:06:56.089 user 0m7.573s 00:06:56.089 sys 0m1.637s 00:06:56.089 08:38:52 env -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:56.089 08:38:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:56.089 ************************************ 00:06:56.089 END TEST env 00:06:56.089 ************************************ 00:06:56.349 08:38:52 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:56.349 08:38:52 -- 
common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:06:56.349 08:38:52 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:56.349 08:38:52 -- common/autotest_common.sh@10 -- # set +x 00:06:56.349 ************************************ 00:06:56.349 START TEST rpc 00:06:56.349 ************************************ 00:06:56.349 08:38:52 rpc -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:56.349 * Looking for test storage... 00:06:56.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:56.349 08:38:52 rpc -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:56.349 08:38:52 rpc -- common/autotest_common.sh@1690 -- # lcov --version 00:06:56.349 08:38:52 rpc -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:56.349 08:38:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.349 08:38:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.349 08:38:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.349 08:38:53 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.349 08:38:53 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.349 08:38:53 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.349 08:38:53 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.349 08:38:53 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.349 08:38:53 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.349 08:38:53 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.349 08:38:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.349 08:38:53 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:56.349 08:38:53 rpc -- scripts/common.sh@345 -- # : 1 00:06:56.349 08:38:53 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.349 08:38:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.349 08:38:53 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:56.349 08:38:53 rpc -- scripts/common.sh@353 -- # local d=1 00:06:56.349 08:38:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.349 08:38:53 rpc -- scripts/common.sh@355 -- # echo 1 00:06:56.349 08:38:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.349 08:38:53 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:56.349 08:38:53 rpc -- scripts/common.sh@353 -- # local d=2 00:06:56.349 08:38:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.349 08:38:53 rpc -- scripts/common.sh@355 -- # echo 2 00:06:56.349 08:38:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.349 08:38:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.349 08:38:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.349 08:38:53 rpc -- scripts/common.sh@368 -- # return 0 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:56.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.349 --rc genhtml_branch_coverage=1 00:06:56.349 --rc genhtml_function_coverage=1 00:06:56.349 --rc genhtml_legend=1 00:06:56.349 --rc geninfo_all_blocks=1 00:06:56.349 --rc geninfo_unexecuted_blocks=1 00:06:56.349 00:06:56.349 ' 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:56.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.349 --rc genhtml_branch_coverage=1 00:06:56.349 --rc genhtml_function_coverage=1 00:06:56.349 --rc genhtml_legend=1 00:06:56.349 --rc geninfo_all_blocks=1 00:06:56.349 --rc geninfo_unexecuted_blocks=1 00:06:56.349 00:06:56.349 ' 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:56.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:56.349 --rc genhtml_branch_coverage=1 00:06:56.349 --rc genhtml_function_coverage=1 00:06:56.349 --rc genhtml_legend=1 00:06:56.349 --rc geninfo_all_blocks=1 00:06:56.349 --rc geninfo_unexecuted_blocks=1 00:06:56.349 00:06:56.349 ' 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:56.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.349 --rc genhtml_branch_coverage=1 00:06:56.349 --rc genhtml_function_coverage=1 00:06:56.349 --rc genhtml_legend=1 00:06:56.349 --rc geninfo_all_blocks=1 00:06:56.349 --rc geninfo_unexecuted_blocks=1 00:06:56.349 00:06:56.349 ' 00:06:56.349 08:38:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56856 00:06:56.349 08:38:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:56.349 08:38:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.349 08:38:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56856 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@832 -- # '[' -z 56856 ']' 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.349 08:38:53 rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:06:56.350 08:38:53 rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.350 08:38:53 rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:06:56.350 08:38:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.608 [2024-11-27 08:38:53.180557] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:06:56.608 [2024-11-27 08:38:53.180992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56856 ] 00:06:56.866 [2024-11-27 08:38:53.370691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.866 [2024-11-27 08:38:53.537842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:56.866 [2024-11-27 08:38:53.537938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56856' to capture a snapshot of events at runtime. 00:06:56.866 [2024-11-27 08:38:53.537962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.866 [2024-11-27 08:38:53.537985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.866 [2024-11-27 08:38:53.538002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56856 for offline analysis/debug. 
00:06:56.866 [2024-11-27 08:38:53.539666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.801 08:38:54 rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:06:57.801 08:38:54 rpc -- common/autotest_common.sh@865 -- # return 0 00:06:57.801 08:38:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:57.801 08:38:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:57.801 08:38:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:57.801 08:38:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:57.801 08:38:54 rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:06:57.801 08:38:54 rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:57.801 08:38:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.801 ************************************ 00:06:57.802 START TEST rpc_integrity 00:06:57.802 ************************************ 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # rpc_integrity 00:06:57.802 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.802 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:57.802 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:57.802 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:57.802 08:38:54 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.802 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:57.802 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.802 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.802 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:57.802 { 00:06:57.802 "name": "Malloc0", 00:06:57.802 "aliases": [ 00:06:57.802 "6da21e12-71b3-4fe6-affe-73512c65d3d0" 00:06:57.802 ], 00:06:57.802 "product_name": "Malloc disk", 00:06:57.802 "block_size": 512, 00:06:57.802 "num_blocks": 16384, 00:06:57.802 "uuid": "6da21e12-71b3-4fe6-affe-73512c65d3d0", 00:06:57.802 "assigned_rate_limits": { 00:06:57.802 "rw_ios_per_sec": 0, 00:06:57.802 "rw_mbytes_per_sec": 0, 00:06:57.802 "r_mbytes_per_sec": 0, 00:06:57.802 "w_mbytes_per_sec": 0 00:06:57.802 }, 00:06:57.802 "claimed": false, 00:06:57.802 "zoned": false, 00:06:57.802 "supported_io_types": { 00:06:57.802 "read": true, 00:06:57.802 "write": true, 00:06:57.802 "unmap": true, 00:06:57.802 "flush": true, 00:06:57.802 "reset": true, 00:06:57.802 "nvme_admin": false, 00:06:57.802 "nvme_io": false, 00:06:57.802 "nvme_io_md": false, 00:06:57.802 "write_zeroes": true, 00:06:57.802 "zcopy": true, 00:06:57.802 "get_zone_info": false, 00:06:57.802 "zone_management": false, 00:06:57.802 "zone_append": false, 00:06:57.802 "compare": false, 00:06:57.802 "compare_and_write": false, 00:06:57.802 "abort": true, 00:06:57.802 "seek_hole": false, 
00:06:57.802 "seek_data": false, 00:06:57.802 "copy": true, 00:06:57.802 "nvme_iov_md": false 00:06:57.802 }, 00:06:57.802 "memory_domains": [ 00:06:57.802 { 00:06:57.802 "dma_device_id": "system", 00:06:57.802 "dma_device_type": 1 00:06:57.802 }, 00:06:57.802 { 00:06:57.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.802 "dma_device_type": 2 00:06:57.802 } 00:06:57.802 ], 00:06:57.802 "driver_specific": {} 00:06:57.802 } 00:06:57.802 ]' 00:06:57.802 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:58.061 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:58.061 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:58.061 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.061 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.061 [2024-11-27 08:38:54.575647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:58.061 [2024-11-27 08:38:54.575951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.061 [2024-11-27 08:38:54.576004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:58.061 [2024-11-27 08:38:54.576033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.061 [2024-11-27 08:38:54.579086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.061 [2024-11-27 08:38:54.579272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:58.061 Passthru0 00:06:58.061 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.061 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:58.061 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.061 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:58.061 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.061 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:58.061 { 00:06:58.061 "name": "Malloc0", 00:06:58.061 "aliases": [ 00:06:58.061 "6da21e12-71b3-4fe6-affe-73512c65d3d0" 00:06:58.061 ], 00:06:58.061 "product_name": "Malloc disk", 00:06:58.061 "block_size": 512, 00:06:58.061 "num_blocks": 16384, 00:06:58.061 "uuid": "6da21e12-71b3-4fe6-affe-73512c65d3d0", 00:06:58.061 "assigned_rate_limits": { 00:06:58.061 "rw_ios_per_sec": 0, 00:06:58.061 "rw_mbytes_per_sec": 0, 00:06:58.061 "r_mbytes_per_sec": 0, 00:06:58.061 "w_mbytes_per_sec": 0 00:06:58.061 }, 00:06:58.061 "claimed": true, 00:06:58.061 "claim_type": "exclusive_write", 00:06:58.061 "zoned": false, 00:06:58.061 "supported_io_types": { 00:06:58.061 "read": true, 00:06:58.061 "write": true, 00:06:58.062 "unmap": true, 00:06:58.062 "flush": true, 00:06:58.062 "reset": true, 00:06:58.062 "nvme_admin": false, 00:06:58.062 "nvme_io": false, 00:06:58.062 "nvme_io_md": false, 00:06:58.062 "write_zeroes": true, 00:06:58.062 "zcopy": true, 00:06:58.062 "get_zone_info": false, 00:06:58.062 "zone_management": false, 00:06:58.062 "zone_append": false, 00:06:58.062 "compare": false, 00:06:58.062 "compare_and_write": false, 00:06:58.062 "abort": true, 00:06:58.062 "seek_hole": false, 00:06:58.062 "seek_data": false, 00:06:58.062 "copy": true, 00:06:58.062 "nvme_iov_md": false 00:06:58.062 }, 00:06:58.062 "memory_domains": [ 00:06:58.062 { 00:06:58.062 "dma_device_id": "system", 00:06:58.062 "dma_device_type": 1 00:06:58.062 }, 00:06:58.062 { 00:06:58.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.062 "dma_device_type": 2 00:06:58.062 } 00:06:58.062 ], 00:06:58.062 "driver_specific": {} 00:06:58.062 }, 00:06:58.062 { 00:06:58.062 "name": "Passthru0", 00:06:58.062 "aliases": [ 00:06:58.062 "60685c61-962f-572e-9f95-ce8472904846" 00:06:58.062 ], 00:06:58.062 "product_name": "passthru", 00:06:58.062 
"block_size": 512, 00:06:58.062 "num_blocks": 16384, 00:06:58.062 "uuid": "60685c61-962f-572e-9f95-ce8472904846", 00:06:58.062 "assigned_rate_limits": { 00:06:58.062 "rw_ios_per_sec": 0, 00:06:58.062 "rw_mbytes_per_sec": 0, 00:06:58.062 "r_mbytes_per_sec": 0, 00:06:58.062 "w_mbytes_per_sec": 0 00:06:58.062 }, 00:06:58.062 "claimed": false, 00:06:58.062 "zoned": false, 00:06:58.062 "supported_io_types": { 00:06:58.062 "read": true, 00:06:58.062 "write": true, 00:06:58.062 "unmap": true, 00:06:58.062 "flush": true, 00:06:58.062 "reset": true, 00:06:58.062 "nvme_admin": false, 00:06:58.062 "nvme_io": false, 00:06:58.062 "nvme_io_md": false, 00:06:58.062 "write_zeroes": true, 00:06:58.062 "zcopy": true, 00:06:58.062 "get_zone_info": false, 00:06:58.062 "zone_management": false, 00:06:58.062 "zone_append": false, 00:06:58.062 "compare": false, 00:06:58.062 "compare_and_write": false, 00:06:58.062 "abort": true, 00:06:58.062 "seek_hole": false, 00:06:58.062 "seek_data": false, 00:06:58.062 "copy": true, 00:06:58.062 "nvme_iov_md": false 00:06:58.062 }, 00:06:58.062 "memory_domains": [ 00:06:58.062 { 00:06:58.062 "dma_device_id": "system", 00:06:58.062 "dma_device_type": 1 00:06:58.062 }, 00:06:58.062 { 00:06:58.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.062 "dma_device_type": 2 00:06:58.062 } 00:06:58.062 ], 00:06:58.062 "driver_specific": { 00:06:58.062 "passthru": { 00:06:58.062 "name": "Passthru0", 00:06:58.062 "base_bdev_name": "Malloc0" 00:06:58.062 } 00:06:58.062 } 00:06:58.062 } 00:06:58.062 ]' 00:06:58.062 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:58.062 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:58.062 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.062 08:38:54 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.062 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.062 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.062 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:58.062 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:58.062 08:38:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:58.062 00:06:58.062 real 0m0.353s 00:06:58.062 user 0m0.220s 00:06:58.062 sys 0m0.038s 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:58.062 08:38:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.062 ************************************ 00:06:58.062 END TEST rpc_integrity 00:06:58.062 ************************************ 00:06:58.062 08:38:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:58.062 08:38:54 rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:06:58.062 08:38:54 rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:58.062 08:38:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.321 ************************************ 00:06:58.321 START TEST rpc_plugins 00:06:58.321 ************************************ 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # rpc_plugins 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:58.321 { 00:06:58.321 "name": "Malloc1", 00:06:58.321 "aliases": [ 00:06:58.321 "7d6caee6-cba0-4698-8194-d904cf5a02d9" 00:06:58.321 ], 00:06:58.321 "product_name": "Malloc disk", 00:06:58.321 "block_size": 4096, 00:06:58.321 "num_blocks": 256, 00:06:58.321 "uuid": "7d6caee6-cba0-4698-8194-d904cf5a02d9", 00:06:58.321 "assigned_rate_limits": { 00:06:58.321 "rw_ios_per_sec": 0, 00:06:58.321 "rw_mbytes_per_sec": 0, 00:06:58.321 "r_mbytes_per_sec": 0, 00:06:58.321 "w_mbytes_per_sec": 0 00:06:58.321 }, 00:06:58.321 "claimed": false, 00:06:58.321 "zoned": false, 00:06:58.321 "supported_io_types": { 00:06:58.321 "read": true, 00:06:58.321 "write": true, 00:06:58.321 "unmap": true, 00:06:58.321 "flush": true, 00:06:58.321 "reset": true, 00:06:58.321 "nvme_admin": false, 00:06:58.321 "nvme_io": false, 00:06:58.321 "nvme_io_md": false, 00:06:58.321 "write_zeroes": true, 00:06:58.321 "zcopy": true, 00:06:58.321 "get_zone_info": false, 00:06:58.321 "zone_management": false, 00:06:58.321 "zone_append": false, 00:06:58.321 "compare": false, 00:06:58.321 "compare_and_write": false, 00:06:58.321 "abort": true, 00:06:58.321 "seek_hole": false, 00:06:58.321 "seek_data": false, 00:06:58.321 "copy": 
true, 00:06:58.321 "nvme_iov_md": false 00:06:58.321 }, 00:06:58.321 "memory_domains": [ 00:06:58.321 { 00:06:58.321 "dma_device_id": "system", 00:06:58.321 "dma_device_type": 1 00:06:58.321 }, 00:06:58.321 { 00:06:58.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.321 "dma_device_type": 2 00:06:58.321 } 00:06:58.321 ], 00:06:58.321 "driver_specific": {} 00:06:58.321 } 00:06:58.321 ]' 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.321 08:38:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:58.321 ************************************ 00:06:58.321 END TEST rpc_plugins 00:06:58.321 ************************************ 00:06:58.321 08:38:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:58.321 00:06:58.321 real 0m0.178s 00:06:58.321 user 0m0.116s 00:06:58.321 sys 0m0.020s 00:06:58.321 08:38:55 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:58.321 08:38:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:58.321 08:38:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:58.321 08:38:55 rpc -- 
common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:06:58.321 08:38:55 rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:58.321 08:38:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.321 ************************************ 00:06:58.321 START TEST rpc_trace_cmd_test 00:06:58.321 ************************************ 00:06:58.321 08:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # rpc_trace_cmd_test 00:06:58.321 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:58.321 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:58.321 08:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.321 08:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.322 08:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.322 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:58.322 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56856", 00:06:58.322 "tpoint_group_mask": "0x8", 00:06:58.322 "iscsi_conn": { 00:06:58.322 "mask": "0x2", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "scsi": { 00:06:58.322 "mask": "0x4", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "bdev": { 00:06:58.322 "mask": "0x8", 00:06:58.322 "tpoint_mask": "0xffffffffffffffff" 00:06:58.322 }, 00:06:58.322 "nvmf_rdma": { 00:06:58.322 "mask": "0x10", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "nvmf_tcp": { 00:06:58.322 "mask": "0x20", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "ftl": { 00:06:58.322 "mask": "0x40", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "blobfs": { 00:06:58.322 "mask": "0x80", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "dsa": { 00:06:58.322 "mask": "0x200", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "thread": { 00:06:58.322 "mask": "0x400", 00:06:58.322 
"tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "nvme_pcie": { 00:06:58.322 "mask": "0x800", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "iaa": { 00:06:58.322 "mask": "0x1000", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "nvme_tcp": { 00:06:58.322 "mask": "0x2000", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "bdev_nvme": { 00:06:58.322 "mask": "0x4000", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "sock": { 00:06:58.322 "mask": "0x8000", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "blob": { 00:06:58.322 "mask": "0x10000", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "bdev_raid": { 00:06:58.322 "mask": "0x20000", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 }, 00:06:58.322 "scheduler": { 00:06:58.322 "mask": "0x40000", 00:06:58.322 "tpoint_mask": "0x0" 00:06:58.322 } 00:06:58.322 }' 00:06:58.322 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:58.580 ************************************ 00:06:58.580 END TEST rpc_trace_cmd_test 00:06:58.580 ************************************ 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:58.580 00:06:58.580 real 0m0.266s 00:06:58.580 user 
0m0.218s 00:06:58.580 sys 0m0.037s 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:58.580 08:38:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.861 08:38:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:58.861 08:38:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:58.861 08:38:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:58.861 08:38:55 rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:06:58.861 08:38:55 rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:58.861 08:38:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.861 ************************************ 00:06:58.861 START TEST rpc_daemon_integrity 00:06:58.861 ************************************ 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # rpc_integrity 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.861 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:58.862 { 00:06:58.862 "name": "Malloc2", 00:06:58.862 "aliases": [ 00:06:58.862 "1d98cc59-9106-4d39-98bb-38fbcf63f0db" 00:06:58.862 ], 00:06:58.862 "product_name": "Malloc disk", 00:06:58.862 "block_size": 512, 00:06:58.862 "num_blocks": 16384, 00:06:58.862 "uuid": "1d98cc59-9106-4d39-98bb-38fbcf63f0db", 00:06:58.862 "assigned_rate_limits": { 00:06:58.862 "rw_ios_per_sec": 0, 00:06:58.862 "rw_mbytes_per_sec": 0, 00:06:58.862 "r_mbytes_per_sec": 0, 00:06:58.862 "w_mbytes_per_sec": 0 00:06:58.862 }, 00:06:58.862 "claimed": false, 00:06:58.862 "zoned": false, 00:06:58.862 "supported_io_types": { 00:06:58.862 "read": true, 00:06:58.862 "write": true, 00:06:58.862 "unmap": true, 00:06:58.862 "flush": true, 00:06:58.862 "reset": true, 00:06:58.862 "nvme_admin": false, 00:06:58.862 "nvme_io": false, 00:06:58.862 "nvme_io_md": false, 00:06:58.862 "write_zeroes": true, 00:06:58.862 "zcopy": true, 00:06:58.862 "get_zone_info": false, 00:06:58.862 "zone_management": false, 00:06:58.862 "zone_append": false, 00:06:58.862 "compare": false, 00:06:58.862 "compare_and_write": false, 00:06:58.862 "abort": true, 00:06:58.862 "seek_hole": false, 00:06:58.862 "seek_data": false, 00:06:58.862 "copy": true, 00:06:58.862 "nvme_iov_md": false 00:06:58.862 }, 00:06:58.862 "memory_domains": [ 00:06:58.862 { 00:06:58.862 "dma_device_id": "system", 00:06:58.862 "dma_device_type": 1 00:06:58.862 }, 00:06:58.862 { 00:06:58.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.862 "dma_device_type": 2 00:06:58.862 } 
00:06:58.862 ], 00:06:58.862 "driver_specific": {} 00:06:58.862 } 00:06:58.862 ]' 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.862 [2024-11-27 08:38:55.531136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:58.862 [2024-11-27 08:38:55.531502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.862 [2024-11-27 08:38:55.531570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:58.862 [2024-11-27 08:38:55.531608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.862 [2024-11-27 08:38:55.535707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.862 [2024-11-27 08:38:55.535779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:58.862 Passthru0 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.862 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:58.862 { 00:06:58.862 "name": "Malloc2", 00:06:58.862 "aliases": [ 00:06:58.862 "1d98cc59-9106-4d39-98bb-38fbcf63f0db" 
00:06:58.862 ], 00:06:58.862 "product_name": "Malloc disk", 00:06:58.862 "block_size": 512, 00:06:58.862 "num_blocks": 16384, 00:06:58.862 "uuid": "1d98cc59-9106-4d39-98bb-38fbcf63f0db", 00:06:58.862 "assigned_rate_limits": { 00:06:58.862 "rw_ios_per_sec": 0, 00:06:58.862 "rw_mbytes_per_sec": 0, 00:06:58.862 "r_mbytes_per_sec": 0, 00:06:58.862 "w_mbytes_per_sec": 0 00:06:58.862 }, 00:06:58.862 "claimed": true, 00:06:58.862 "claim_type": "exclusive_write", 00:06:58.862 "zoned": false, 00:06:58.862 "supported_io_types": { 00:06:58.862 "read": true, 00:06:58.862 "write": true, 00:06:58.862 "unmap": true, 00:06:58.862 "flush": true, 00:06:58.862 "reset": true, 00:06:58.862 "nvme_admin": false, 00:06:58.862 "nvme_io": false, 00:06:58.862 "nvme_io_md": false, 00:06:58.862 "write_zeroes": true, 00:06:58.862 "zcopy": true, 00:06:58.862 "get_zone_info": false, 00:06:58.862 "zone_management": false, 00:06:58.862 "zone_append": false, 00:06:58.862 "compare": false, 00:06:58.862 "compare_and_write": false, 00:06:58.862 "abort": true, 00:06:58.862 "seek_hole": false, 00:06:58.862 "seek_data": false, 00:06:58.862 "copy": true, 00:06:58.862 "nvme_iov_md": false 00:06:58.862 }, 00:06:58.862 "memory_domains": [ 00:06:58.862 { 00:06:58.862 "dma_device_id": "system", 00:06:58.862 "dma_device_type": 1 00:06:58.862 }, 00:06:58.862 { 00:06:58.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.862 "dma_device_type": 2 00:06:58.862 } 00:06:58.862 ], 00:06:58.862 "driver_specific": {} 00:06:58.862 }, 00:06:58.862 { 00:06:58.862 "name": "Passthru0", 00:06:58.862 "aliases": [ 00:06:58.862 "b0ddb871-10be-5d94-a8eb-a4d14b22a98c" 00:06:58.862 ], 00:06:58.862 "product_name": "passthru", 00:06:58.862 "block_size": 512, 00:06:58.862 "num_blocks": 16384, 00:06:58.862 "uuid": "b0ddb871-10be-5d94-a8eb-a4d14b22a98c", 00:06:58.862 "assigned_rate_limits": { 00:06:58.862 "rw_ios_per_sec": 0, 00:06:58.862 "rw_mbytes_per_sec": 0, 00:06:58.862 "r_mbytes_per_sec": 0, 00:06:58.862 "w_mbytes_per_sec": 0 
00:06:58.863 }, 00:06:58.863 "claimed": false, 00:06:58.863 "zoned": false, 00:06:58.863 "supported_io_types": { 00:06:58.863 "read": true, 00:06:58.863 "write": true, 00:06:58.863 "unmap": true, 00:06:58.863 "flush": true, 00:06:58.863 "reset": true, 00:06:58.863 "nvme_admin": false, 00:06:58.863 "nvme_io": false, 00:06:58.863 "nvme_io_md": false, 00:06:58.863 "write_zeroes": true, 00:06:58.863 "zcopy": true, 00:06:58.863 "get_zone_info": false, 00:06:58.863 "zone_management": false, 00:06:58.863 "zone_append": false, 00:06:58.863 "compare": false, 00:06:58.863 "compare_and_write": false, 00:06:58.863 "abort": true, 00:06:58.863 "seek_hole": false, 00:06:58.863 "seek_data": false, 00:06:58.863 "copy": true, 00:06:58.863 "nvme_iov_md": false 00:06:58.863 }, 00:06:58.863 "memory_domains": [ 00:06:58.863 { 00:06:58.863 "dma_device_id": "system", 00:06:58.863 "dma_device_type": 1 00:06:58.863 }, 00:06:58.863 { 00:06:58.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.863 "dma_device_type": 2 00:06:58.863 } 00:06:58.863 ], 00:06:58.863 "driver_specific": { 00:06:58.863 "passthru": { 00:06:58.863 "name": "Passthru0", 00:06:58.863 "base_bdev_name": "Malloc2" 00:06:58.863 } 00:06:58.863 } 00:06:58.863 } 00:06:58.863 ]' 00:06:58.863 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:58.863 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:58.863 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:58.863 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.863 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:59.121 00:06:59.121 real 0m0.352s 00:06:59.121 user 0m0.211s 00:06:59.121 sys 0m0.039s 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:59.121 08:38:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:59.121 ************************************ 00:06:59.121 END TEST rpc_daemon_integrity 00:06:59.121 ************************************ 00:06:59.121 08:38:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:59.121 08:38:55 rpc -- rpc/rpc.sh@84 -- # killprocess 56856 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@951 -- # '[' -z 56856 ']' 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@955 -- # kill -0 56856 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@956 -- # uname 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 56856 00:06:59.121 killing process with pid 56856 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@961 -- 
# '[' reactor_0 = sudo ']' 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 56856' 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@970 -- # kill 56856 00:06:59.121 08:38:55 rpc -- common/autotest_common.sh@975 -- # wait 56856 00:07:01.651 00:07:01.651 real 0m5.134s 00:07:01.651 user 0m5.875s 00:07:01.651 sys 0m0.895s 00:07:01.651 08:38:58 rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:01.651 ************************************ 00:07:01.651 END TEST rpc 00:07:01.651 ************************************ 00:07:01.651 08:38:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.651 08:38:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:01.651 08:38:58 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:01.651 08:38:58 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:01.651 08:38:58 -- common/autotest_common.sh@10 -- # set +x 00:07:01.651 ************************************ 00:07:01.651 START TEST skip_rpc 00:07:01.651 ************************************ 00:07:01.651 08:38:58 skip_rpc -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:01.651 * Looking for test storage... 
00:07:01.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:01.651 08:38:58 skip_rpc -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:01.651 08:38:58 skip_rpc -- common/autotest_common.sh@1690 -- # lcov --version 00:07:01.651 08:38:58 skip_rpc -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:01.651 08:38:58 skip_rpc -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.651 08:38:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:01.651 08:38:58 skip_rpc -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.651 08:38:58 skip_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.651 --rc genhtml_branch_coverage=1 00:07:01.651 --rc genhtml_function_coverage=1 00:07:01.651 --rc genhtml_legend=1 00:07:01.651 --rc geninfo_all_blocks=1 00:07:01.651 --rc geninfo_unexecuted_blocks=1 00:07:01.651 00:07:01.651 ' 00:07:01.651 08:38:58 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:01.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.651 --rc genhtml_branch_coverage=1 00:07:01.651 --rc genhtml_function_coverage=1 00:07:01.651 --rc genhtml_legend=1 00:07:01.651 --rc geninfo_all_blocks=1 00:07:01.651 --rc geninfo_unexecuted_blocks=1 00:07:01.651 00:07:01.651 ' 00:07:01.652 08:38:58 skip_rpc -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:07:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.652 --rc genhtml_branch_coverage=1 00:07:01.652 --rc genhtml_function_coverage=1 00:07:01.652 --rc genhtml_legend=1 00:07:01.652 --rc geninfo_all_blocks=1 00:07:01.652 --rc geninfo_unexecuted_blocks=1 00:07:01.652 00:07:01.652 ' 00:07:01.652 08:38:58 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.652 --rc genhtml_branch_coverage=1 00:07:01.652 --rc genhtml_function_coverage=1 00:07:01.652 --rc genhtml_legend=1 00:07:01.652 --rc geninfo_all_blocks=1 00:07:01.652 --rc geninfo_unexecuted_blocks=1 00:07:01.652 00:07:01.652 ' 00:07:01.652 08:38:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:01.652 08:38:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:01.652 08:38:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:01.652 08:38:58 skip_rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:01.652 08:38:58 skip_rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:01.652 08:38:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.652 ************************************ 00:07:01.652 START TEST skip_rpc 00:07:01.652 ************************************ 00:07:01.652 08:38:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # test_skip_rpc 00:07:01.652 08:38:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57085 00:07:01.652 08:38:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.652 08:38:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:01.652 08:38:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:01.652 [2024-11-27 08:38:58.406326] Starting SPDK v25.01-pre 
git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:07:01.652 [2024-11-27 08:38:58.406570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57085 ] 00:07:01.911 [2024-11-27 08:38:58.603966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.170 [2024-11-27 08:38:58.775349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57085 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' -z 57085 ']' 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # kill -0 57085 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # uname 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 57085 00:07:07.436 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:07:07.437 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:07:07.437 killing process with pid 57085 00:07:07.437 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 57085' 00:07:07.437 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # kill 57085 00:07:07.437 08:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@975 -- # wait 57085 00:07:09.338 00:07:09.338 real 0m7.438s 00:07:09.338 user 0m6.792s 00:07:09.338 sys 0m0.541s 00:07:09.338 08:39:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:09.338 ************************************ 00:07:09.338 END TEST skip_rpc 00:07:09.338 ************************************ 00:07:09.338 08:39:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.338 08:39:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:09.338 08:39:05 skip_rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:09.338 08:39:05 skip_rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:09.338 08:39:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.338 
************************************ 00:07:09.338 START TEST skip_rpc_with_json 00:07:09.338 ************************************ 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # test_skip_rpc_with_json 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:09.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57189 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57189 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # '[' -z 57189 ']' 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:09.338 08:39:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:09.338 [2024-11-27 08:39:05.866529] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:07:09.338 [2024-11-27 08:39:05.866695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57189 ] 00:07:09.338 [2024-11-27 08:39:06.041937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.597 [2024-11-27 08:39:06.189305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@865 -- # return 0 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:10.532 [2024-11-27 08:39:07.156209] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:10.532 request: 00:07:10.532 { 00:07:10.532 "trtype": "tcp", 00:07:10.532 "method": "nvmf_get_transports", 00:07:10.532 "req_id": 1 00:07:10.532 } 00:07:10.532 Got JSON-RPC error response 00:07:10.532 response: 00:07:10.532 { 00:07:10.532 "code": -19, 00:07:10.532 "message": "No such device" 00:07:10.532 } 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:10.532 [2024-11-27 08:39:07.168394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.532 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:10.791 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.791 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:10.791 { 00:07:10.791 "subsystems": [ 00:07:10.791 { 00:07:10.791 "subsystem": "fsdev", 00:07:10.791 "config": [ 00:07:10.791 { 00:07:10.791 "method": "fsdev_set_opts", 00:07:10.791 "params": { 00:07:10.791 "fsdev_io_pool_size": 65535, 00:07:10.791 "fsdev_io_cache_size": 256 00:07:10.791 } 00:07:10.791 } 00:07:10.791 ] 00:07:10.791 }, 00:07:10.791 { 00:07:10.791 "subsystem": "keyring", 00:07:10.791 "config": [] 00:07:10.791 }, 00:07:10.791 { 00:07:10.791 "subsystem": "iobuf", 00:07:10.791 "config": [ 00:07:10.791 { 00:07:10.791 "method": "iobuf_set_options", 00:07:10.791 "params": { 00:07:10.791 "small_pool_count": 8192, 00:07:10.791 "large_pool_count": 1024, 00:07:10.791 "small_bufsize": 8192, 00:07:10.791 "large_bufsize": 135168, 00:07:10.791 "enable_numa": false 00:07:10.791 } 00:07:10.791 } 00:07:10.791 ] 00:07:10.791 }, 00:07:10.791 { 00:07:10.791 "subsystem": "sock", 00:07:10.791 "config": [ 00:07:10.791 { 00:07:10.791 "method": "sock_set_default_impl", 00:07:10.791 "params": { 00:07:10.791 "impl_name": "posix" 00:07:10.791 } 00:07:10.791 }, 00:07:10.791 { 00:07:10.791 "method": "sock_impl_set_options", 00:07:10.791 "params": { 00:07:10.791 "impl_name": "ssl", 00:07:10.791 "recv_buf_size": 4096, 00:07:10.791 "send_buf_size": 4096, 00:07:10.791 "enable_recv_pipe": true, 00:07:10.791 "enable_quickack": false, 00:07:10.791 
"enable_placement_id": 0, 00:07:10.791 "enable_zerocopy_send_server": true, 00:07:10.791 "enable_zerocopy_send_client": false, 00:07:10.791 "zerocopy_threshold": 0, 00:07:10.791 "tls_version": 0, 00:07:10.791 "enable_ktls": false 00:07:10.791 } 00:07:10.791 }, 00:07:10.791 { 00:07:10.791 "method": "sock_impl_set_options", 00:07:10.791 "params": { 00:07:10.791 "impl_name": "posix", 00:07:10.791 "recv_buf_size": 2097152, 00:07:10.791 "send_buf_size": 2097152, 00:07:10.791 "enable_recv_pipe": true, 00:07:10.791 "enable_quickack": false, 00:07:10.792 "enable_placement_id": 0, 00:07:10.792 "enable_zerocopy_send_server": true, 00:07:10.792 "enable_zerocopy_send_client": false, 00:07:10.792 "zerocopy_threshold": 0, 00:07:10.792 "tls_version": 0, 00:07:10.792 "enable_ktls": false 00:07:10.792 } 00:07:10.792 } 00:07:10.792 ] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "vmd", 00:07:10.792 "config": [] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "accel", 00:07:10.792 "config": [ 00:07:10.792 { 00:07:10.792 "method": "accel_set_options", 00:07:10.792 "params": { 00:07:10.792 "small_cache_size": 128, 00:07:10.792 "large_cache_size": 16, 00:07:10.792 "task_count": 2048, 00:07:10.792 "sequence_count": 2048, 00:07:10.792 "buf_count": 2048 00:07:10.792 } 00:07:10.792 } 00:07:10.792 ] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "bdev", 00:07:10.792 "config": [ 00:07:10.792 { 00:07:10.792 "method": "bdev_set_options", 00:07:10.792 "params": { 00:07:10.792 "bdev_io_pool_size": 65535, 00:07:10.792 "bdev_io_cache_size": 256, 00:07:10.792 "bdev_auto_examine": true, 00:07:10.792 "iobuf_small_cache_size": 128, 00:07:10.792 "iobuf_large_cache_size": 16 00:07:10.792 } 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "method": "bdev_raid_set_options", 00:07:10.792 "params": { 00:07:10.792 "process_window_size_kb": 1024, 00:07:10.792 "process_max_bandwidth_mb_sec": 0 00:07:10.792 } 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "method": "bdev_iscsi_set_options", 
00:07:10.792 "params": { 00:07:10.792 "timeout_sec": 30 00:07:10.792 } 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "method": "bdev_nvme_set_options", 00:07:10.792 "params": { 00:07:10.792 "action_on_timeout": "none", 00:07:10.792 "timeout_us": 0, 00:07:10.792 "timeout_admin_us": 0, 00:07:10.792 "keep_alive_timeout_ms": 10000, 00:07:10.792 "arbitration_burst": 0, 00:07:10.792 "low_priority_weight": 0, 00:07:10.792 "medium_priority_weight": 0, 00:07:10.792 "high_priority_weight": 0, 00:07:10.792 "nvme_adminq_poll_period_us": 10000, 00:07:10.792 "nvme_ioq_poll_period_us": 0, 00:07:10.792 "io_queue_requests": 0, 00:07:10.792 "delay_cmd_submit": true, 00:07:10.792 "transport_retry_count": 4, 00:07:10.792 "bdev_retry_count": 3, 00:07:10.792 "transport_ack_timeout": 0, 00:07:10.792 "ctrlr_loss_timeout_sec": 0, 00:07:10.792 "reconnect_delay_sec": 0, 00:07:10.792 "fast_io_fail_timeout_sec": 0, 00:07:10.792 "disable_auto_failback": false, 00:07:10.792 "generate_uuids": false, 00:07:10.792 "transport_tos": 0, 00:07:10.792 "nvme_error_stat": false, 00:07:10.792 "rdma_srq_size": 0, 00:07:10.792 "io_path_stat": false, 00:07:10.792 "allow_accel_sequence": false, 00:07:10.792 "rdma_max_cq_size": 0, 00:07:10.792 "rdma_cm_event_timeout_ms": 0, 00:07:10.792 "dhchap_digests": [ 00:07:10.792 "sha256", 00:07:10.792 "sha384", 00:07:10.792 "sha512" 00:07:10.792 ], 00:07:10.792 "dhchap_dhgroups": [ 00:07:10.792 "null", 00:07:10.792 "ffdhe2048", 00:07:10.792 "ffdhe3072", 00:07:10.792 "ffdhe4096", 00:07:10.792 "ffdhe6144", 00:07:10.792 "ffdhe8192" 00:07:10.792 ] 00:07:10.792 } 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "method": "bdev_nvme_set_hotplug", 00:07:10.792 "params": { 00:07:10.792 "period_us": 100000, 00:07:10.792 "enable": false 00:07:10.792 } 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "method": "bdev_wait_for_examine" 00:07:10.792 } 00:07:10.792 ] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "scsi", 00:07:10.792 "config": null 00:07:10.792 }, 00:07:10.792 { 
00:07:10.792 "subsystem": "scheduler", 00:07:10.792 "config": [ 00:07:10.792 { 00:07:10.792 "method": "framework_set_scheduler", 00:07:10.792 "params": { 00:07:10.792 "name": "static" 00:07:10.792 } 00:07:10.792 } 00:07:10.792 ] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "vhost_scsi", 00:07:10.792 "config": [] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "vhost_blk", 00:07:10.792 "config": [] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "ublk", 00:07:10.792 "config": [] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "nbd", 00:07:10.792 "config": [] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "nvmf", 00:07:10.792 "config": [ 00:07:10.792 { 00:07:10.792 "method": "nvmf_set_config", 00:07:10.792 "params": { 00:07:10.792 "discovery_filter": "match_any", 00:07:10.792 "admin_cmd_passthru": { 00:07:10.792 "identify_ctrlr": false 00:07:10.792 }, 00:07:10.792 "dhchap_digests": [ 00:07:10.792 "sha256", 00:07:10.792 "sha384", 00:07:10.792 "sha512" 00:07:10.792 ], 00:07:10.792 "dhchap_dhgroups": [ 00:07:10.792 "null", 00:07:10.792 "ffdhe2048", 00:07:10.792 "ffdhe3072", 00:07:10.792 "ffdhe4096", 00:07:10.792 "ffdhe6144", 00:07:10.792 "ffdhe8192" 00:07:10.792 ] 00:07:10.792 } 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "method": "nvmf_set_max_subsystems", 00:07:10.792 "params": { 00:07:10.792 "max_subsystems": 1024 00:07:10.792 } 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "method": "nvmf_set_crdt", 00:07:10.792 "params": { 00:07:10.792 "crdt1": 0, 00:07:10.792 "crdt2": 0, 00:07:10.792 "crdt3": 0 00:07:10.792 } 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "method": "nvmf_create_transport", 00:07:10.792 "params": { 00:07:10.792 "trtype": "TCP", 00:07:10.792 "max_queue_depth": 128, 00:07:10.792 "max_io_qpairs_per_ctrlr": 127, 00:07:10.792 "in_capsule_data_size": 4096, 00:07:10.792 "max_io_size": 131072, 00:07:10.792 "io_unit_size": 131072, 00:07:10.792 "max_aq_depth": 128, 00:07:10.792 "num_shared_buffers": 511, 
00:07:10.792 "buf_cache_size": 4294967295, 00:07:10.792 "dif_insert_or_strip": false, 00:07:10.792 "zcopy": false, 00:07:10.792 "c2h_success": true, 00:07:10.792 "sock_priority": 0, 00:07:10.792 "abort_timeout_sec": 1, 00:07:10.792 "ack_timeout": 0, 00:07:10.792 "data_wr_pool_size": 0 00:07:10.792 } 00:07:10.792 } 00:07:10.792 ] 00:07:10.792 }, 00:07:10.792 { 00:07:10.792 "subsystem": "iscsi", 00:07:10.792 "config": [ 00:07:10.792 { 00:07:10.792 "method": "iscsi_set_options", 00:07:10.792 "params": { 00:07:10.792 "node_base": "iqn.2016-06.io.spdk", 00:07:10.792 "max_sessions": 128, 00:07:10.792 "max_connections_per_session": 2, 00:07:10.792 "max_queue_depth": 64, 00:07:10.792 "default_time2wait": 2, 00:07:10.792 "default_time2retain": 20, 00:07:10.792 "first_burst_length": 8192, 00:07:10.792 "immediate_data": true, 00:07:10.792 "allow_duplicated_isid": false, 00:07:10.792 "error_recovery_level": 0, 00:07:10.792 "nop_timeout": 60, 00:07:10.792 "nop_in_interval": 30, 00:07:10.792 "disable_chap": false, 00:07:10.792 "require_chap": false, 00:07:10.792 "mutual_chap": false, 00:07:10.792 "chap_group": 0, 00:07:10.792 "max_large_datain_per_connection": 64, 00:07:10.792 "max_r2t_per_connection": 4, 00:07:10.792 "pdu_pool_size": 36864, 00:07:10.792 "immediate_data_pool_size": 16384, 00:07:10.792 "data_out_pool_size": 2048 00:07:10.792 } 00:07:10.792 } 00:07:10.792 ] 00:07:10.792 } 00:07:10.792 ] 00:07:10.792 } 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57189 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' -z 57189 ']' 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # kill -0 57189 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # uname 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 57189 00:07:10.792 killing process with pid 57189 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # echo 'killing process with pid 57189' 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # kill 57189 00:07:10.792 08:39:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@975 -- # wait 57189 00:07:13.325 08:39:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57245 00:07:13.325 08:39:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:13.325 08:39:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57245 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' -z 57245 ']' 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # kill -0 57245 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # uname 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 57245 00:07:18.619 killing process with pid 57245 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # echo 'killing process with pid 57245' 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # kill 57245 00:07:18.619 08:39:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@975 -- # wait 57245 00:07:20.519 08:39:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:20.519 00:07:20.519 real 0m11.250s 00:07:20.519 user 0m10.531s 00:07:20.519 sys 0m1.157s 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:20.519 ************************************ 00:07:20.519 END TEST skip_rpc_with_json 00:07:20.519 ************************************ 00:07:20.519 08:39:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:20.519 08:39:17 skip_rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:20.519 08:39:17 skip_rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:20.519 08:39:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.519 ************************************ 00:07:20.519 START TEST skip_rpc_with_delay 00:07:20.519 ************************************ 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # test_skip_rpc_with_delay 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:20.519 
08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.519 [2024-11-27 08:39:17.197034] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.519 00:07:20.519 real 0m0.206s 00:07:20.519 user 0m0.122s 00:07:20.519 sys 0m0.081s 00:07:20.519 ************************************ 00:07:20.519 END TEST skip_rpc_with_delay 00:07:20.519 ************************************ 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:20.519 08:39:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:20.778 08:39:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:20.778 08:39:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:20.778 08:39:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:20.778 08:39:17 skip_rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:20.778 08:39:17 skip_rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:20.778 08:39:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.778 ************************************ 00:07:20.778 START TEST exit_on_failed_rpc_init 00:07:20.778 ************************************ 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # test_exit_on_failed_rpc_init 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57378 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57378 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.778 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # '[' -z 57378 ']' 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:20.778 08:39:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:20.778 [2024-11-27 08:39:17.475393] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:07:20.778 [2024-11-27 08:39:17.475593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57378 ] 00:07:21.035 [2024-11-27 08:39:17.668686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.292 [2024-11-27 08:39:17.819976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@865 -- # return 0 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@652 -- # local es=0 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:22.284 08:39:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.284 [2024-11-27 08:39:18.868445] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:07:22.284 [2024-11-27 08:39:18.868604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57402 ] 00:07:22.557 [2024-11-27 08:39:19.049187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.557 [2024-11-27 08:39:19.205525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.557 [2024-11-27 08:39:19.205669] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:22.557 [2024-11-27 08:39:19.205691] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:22.557 [2024-11-27 08:39:19.205709] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57378 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' -z 57378 ']' 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # kill -0 57378 00:07:22.816 08:39:19 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # uname 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 57378 00:07:22.816 killing process with pid 57378 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # echo 'killing process with pid 57378' 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # kill 57378 00:07:22.816 08:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@975 -- # wait 57378 00:07:25.350 ************************************ 00:07:25.350 END TEST exit_on_failed_rpc_init 00:07:25.350 ************************************ 00:07:25.350 00:07:25.350 real 0m4.712s 00:07:25.350 user 0m5.085s 00:07:25.350 sys 0m0.783s 00:07:25.350 08:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:25.350 08:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:25.350 08:39:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:25.350 00:07:25.350 real 0m24.033s 00:07:25.350 user 0m22.715s 00:07:25.350 sys 0m2.789s 00:07:25.350 08:39:22 skip_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:25.350 08:39:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.350 ************************************ 00:07:25.350 END TEST skip_rpc 00:07:25.350 ************************************ 00:07:25.610 08:39:22 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:25.610 08:39:22 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:25.610 08:39:22 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:25.610 08:39:22 -- common/autotest_common.sh@10 -- # set +x 00:07:25.610 ************************************ 00:07:25.610 START TEST rpc_client 00:07:25.610 ************************************ 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:25.610 * Looking for test storage... 00:07:25.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@345 
-- # : 1 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.610 08:39:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.610 --rc genhtml_branch_coverage=1 00:07:25.610 --rc genhtml_function_coverage=1 00:07:25.610 --rc genhtml_legend=1 00:07:25.610 --rc geninfo_all_blocks=1 00:07:25.610 --rc geninfo_unexecuted_blocks=1 00:07:25.610 00:07:25.610 ' 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.610 --rc genhtml_branch_coverage=1 00:07:25.610 --rc genhtml_function_coverage=1 00:07:25.610 --rc 
genhtml_legend=1 00:07:25.610 --rc geninfo_all_blocks=1 00:07:25.610 --rc geninfo_unexecuted_blocks=1 00:07:25.610 00:07:25.610 ' 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.610 --rc genhtml_branch_coverage=1 00:07:25.610 --rc genhtml_function_coverage=1 00:07:25.610 --rc genhtml_legend=1 00:07:25.610 --rc geninfo_all_blocks=1 00:07:25.610 --rc geninfo_unexecuted_blocks=1 00:07:25.610 00:07:25.610 ' 00:07:25.610 08:39:22 rpc_client -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.610 --rc genhtml_branch_coverage=1 00:07:25.610 --rc genhtml_function_coverage=1 00:07:25.610 --rc genhtml_legend=1 00:07:25.610 --rc geninfo_all_blocks=1 00:07:25.610 --rc geninfo_unexecuted_blocks=1 00:07:25.610 00:07:25.610 ' 00:07:25.610 08:39:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:25.869 OK 00:07:25.869 08:39:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:25.869 00:07:25.869 real 0m0.269s 00:07:25.869 user 0m0.154s 00:07:25.869 sys 0m0.126s 00:07:25.869 08:39:22 rpc_client -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:25.869 08:39:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:25.869 ************************************ 00:07:25.869 END TEST rpc_client 00:07:25.869 ************************************ 00:07:25.869 08:39:22 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:25.869 08:39:22 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:25.869 08:39:22 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:25.869 08:39:22 -- common/autotest_common.sh@10 -- # set +x 00:07:25.869 ************************************ 00:07:25.869 START TEST json_config 
00:07:25.869 ************************************ 00:07:25.869 08:39:22 json_config -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:25.869 08:39:22 json_config -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.869 08:39:22 json_config -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.869 08:39:22 json_config -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.128 08:39:22 json_config -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.128 08:39:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.128 08:39:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.128 08:39:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.128 08:39:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.128 08:39:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.128 08:39:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.128 08:39:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.128 08:39:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.128 08:39:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.128 08:39:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.128 08:39:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.128 08:39:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:26.128 08:39:22 json_config -- scripts/common.sh@345 -- # : 1 00:07:26.128 08:39:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.129 08:39:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.129 08:39:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:26.129 08:39:22 json_config -- scripts/common.sh@353 -- # local d=1 00:07:26.129 08:39:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.129 08:39:22 json_config -- scripts/common.sh@355 -- # echo 1 00:07:26.129 08:39:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.129 08:39:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:26.129 08:39:22 json_config -- scripts/common.sh@353 -- # local d=2 00:07:26.129 08:39:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.129 08:39:22 json_config -- scripts/common.sh@355 -- # echo 2 00:07:26.129 08:39:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.129 08:39:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.129 08:39:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.129 08:39:22 json_config -- scripts/common.sh@368 -- # return 0 00:07:26.129 08:39:22 json_config -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.129 08:39:22 json_config -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.129 --rc genhtml_branch_coverage=1 00:07:26.129 --rc genhtml_function_coverage=1 00:07:26.129 --rc genhtml_legend=1 00:07:26.129 --rc geninfo_all_blocks=1 00:07:26.129 --rc geninfo_unexecuted_blocks=1 00:07:26.129 00:07:26.129 ' 00:07:26.129 08:39:22 json_config -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.129 --rc genhtml_branch_coverage=1 00:07:26.129 --rc genhtml_function_coverage=1 00:07:26.129 --rc genhtml_legend=1 00:07:26.129 --rc geninfo_all_blocks=1 00:07:26.129 --rc geninfo_unexecuted_blocks=1 00:07:26.129 00:07:26.129 ' 00:07:26.129 08:39:22 json_config -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.129 --rc genhtml_branch_coverage=1 00:07:26.129 --rc genhtml_function_coverage=1 00:07:26.129 --rc genhtml_legend=1 00:07:26.129 --rc geninfo_all_blocks=1 00:07:26.129 --rc geninfo_unexecuted_blocks=1 00:07:26.129 00:07:26.129 ' 00:07:26.129 08:39:22 json_config -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.129 --rc genhtml_branch_coverage=1 00:07:26.129 --rc genhtml_function_coverage=1 00:07:26.129 --rc genhtml_legend=1 00:07:26.129 --rc geninfo_all_blocks=1 00:07:26.129 --rc geninfo_unexecuted_blocks=1 00:07:26.129 00:07:26.129 ' 00:07:26.129 08:39:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f013caa0-29b3-4b77-8191-05c16480dbd7 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f013caa0-29b3-4b77-8191-05c16480dbd7 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.129 08:39:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.129 08:39:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.129 08:39:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.129 08:39:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.129 08:39:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.129 08:39:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.129 08:39:22 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.129 08:39:22 json_config -- paths/export.sh@5 -- # export PATH 00:07:26.129 08:39:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@51 -- # : 0 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.129 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.129 08:39:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.129 08:39:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:07:26.129 08:39:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:26.129 08:39:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:26.129 08:39:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:26.129 08:39:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:26.129 WARNING: No tests are enabled so not running JSON configuration tests 00:07:26.129 08:39:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:26.129 08:39:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:26.129 00:07:26.129 real 0m0.204s 00:07:26.129 user 0m0.139s 00:07:26.129 sys 0m0.069s 00:07:26.129 08:39:22 json_config -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:26.129 08:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.129 ************************************ 00:07:26.129 END TEST json_config 00:07:26.129 ************************************ 00:07:26.129 08:39:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:26.129 08:39:22 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:26.129 08:39:22 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:26.129 08:39:22 -- common/autotest_common.sh@10 -- # set +x 00:07:26.129 ************************************ 00:07:26.129 START TEST json_config_extra_key 00:07:26.129 ************************************ 00:07:26.129 08:39:22 json_config_extra_key -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:26.129 08:39:22 json_config_extra_key -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.129 08:39:22 json_config_extra_key -- 
common/autotest_common.sh@1690 -- # lcov --version 00:07:26.129 08:39:22 json_config_extra_key -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.129 08:39:22 json_config_extra_key -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.129 08:39:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.130 08:39:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:26.130 08:39:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.130 08:39:22 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.130 --rc genhtml_branch_coverage=1 00:07:26.130 --rc genhtml_function_coverage=1 00:07:26.130 --rc genhtml_legend=1 00:07:26.130 --rc geninfo_all_blocks=1 00:07:26.130 --rc geninfo_unexecuted_blocks=1 00:07:26.130 00:07:26.130 ' 00:07:26.130 08:39:22 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.130 --rc genhtml_branch_coverage=1 00:07:26.130 --rc genhtml_function_coverage=1 00:07:26.130 --rc 
genhtml_legend=1 00:07:26.130 --rc geninfo_all_blocks=1 00:07:26.130 --rc geninfo_unexecuted_blocks=1 00:07:26.130 00:07:26.130 ' 00:07:26.130 08:39:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.130 --rc genhtml_branch_coverage=1 00:07:26.130 --rc genhtml_function_coverage=1 00:07:26.130 --rc genhtml_legend=1 00:07:26.130 --rc geninfo_all_blocks=1 00:07:26.130 --rc geninfo_unexecuted_blocks=1 00:07:26.130 00:07:26.130 ' 00:07:26.130 08:39:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.130 --rc genhtml_branch_coverage=1 00:07:26.130 --rc genhtml_function_coverage=1 00:07:26.130 --rc genhtml_legend=1 00:07:26.130 --rc geninfo_all_blocks=1 00:07:26.130 --rc geninfo_unexecuted_blocks=1 00:07:26.130 00:07:26.130 ' 00:07:26.130 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.130 08:39:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f013caa0-29b3-4b77-8191-05c16480dbd7 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f013caa0-29b3-4b77-8191-05c16480dbd7 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.390 08:39:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.390 08:39:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.390 08:39:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.390 08:39:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.390 08:39:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.390 08:39:22 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.390 08:39:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.390 08:39:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:26.390 08:39:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.390 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.390 08:39:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:26.390 INFO: launching applications... 00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:07:26.390 08:39:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:26.390 08:39:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:26.390 08:39:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:26.390 08:39:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:26.390 08:39:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:26.390 08:39:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:26.390 08:39:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.390 08:39:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.390 08:39:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57612 00:07:26.391 Waiting for target to run... 00:07:26.391 08:39:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:26.391 08:39:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57612 /var/tmp/spdk_tgt.sock 00:07:26.391 08:39:22 json_config_extra_key -- common/autotest_common.sh@832 -- # '[' -z 57612 ']' 00:07:26.391 08:39:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:26.391 08:39:22 json_config_extra_key -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:26.391 08:39:22 json_config_extra_key -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:26.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:26.391 08:39:22 json_config_extra_key -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:26.391 08:39:22 json_config_extra_key -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:26.391 08:39:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:26.391 [2024-11-27 08:39:23.037022] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:07:26.391 [2024-11-27 08:39:23.037234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57612 ] 00:07:26.958 [2024-11-27 08:39:23.526939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.958 [2024-11-27 08:39:23.676107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.895 08:39:24 json_config_extra_key -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:27.895 08:39:24 json_config_extra_key -- common/autotest_common.sh@865 -- # return 0 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:27.895 00:07:27.895 INFO: shutting down applications... 00:07:27.895 08:39:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:27.895 08:39:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57612 ]] 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57612 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57612 00:07:27.895 08:39:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:28.154 08:39:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:28.154 08:39:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:28.154 08:39:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57612 00:07:28.154 08:39:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:28.760 08:39:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:28.760 08:39:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:28.760 08:39:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57612 00:07:28.760 08:39:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:29.328 08:39:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:29.329 08:39:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:29.329 08:39:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57612 00:07:29.329 08:39:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:29.896 08:39:26 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:07:29.896 08:39:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:29.896 08:39:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57612 00:07:29.896 08:39:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:30.156 08:39:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:30.156 08:39:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:30.156 08:39:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57612 00:07:30.156 08:39:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:30.724 08:39:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:30.724 08:39:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:30.724 08:39:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57612 00:07:30.724 08:39:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:30.724 08:39:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:30.725 08:39:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:30.725 SPDK target shutdown done 00:07:30.725 08:39:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:30.725 Success 00:07:30.725 08:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:30.725 ************************************ 00:07:30.725 END TEST json_config_extra_key 00:07:30.725 ************************************ 00:07:30.725 00:07:30.725 real 0m4.703s 00:07:30.725 user 0m4.374s 00:07:30.725 sys 0m0.725s 00:07:30.725 08:39:27 json_config_extra_key -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:30.725 08:39:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:30.725 08:39:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:30.725 08:39:27 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:30.725 08:39:27 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:30.725 08:39:27 -- common/autotest_common.sh@10 -- # set +x 00:07:30.725 ************************************ 00:07:30.725 START TEST alias_rpc 00:07:30.725 ************************************ 00:07:30.725 08:39:27 alias_rpc -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:30.984 * Looking for test storage... 00:07:30.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1690 -- # lcov --version 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:30.984 08:39:27 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.984 08:39:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:30.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.984 --rc genhtml_branch_coverage=1 00:07:30.984 --rc genhtml_function_coverage=1 00:07:30.984 --rc genhtml_legend=1 00:07:30.984 --rc geninfo_all_blocks=1 00:07:30.984 --rc geninfo_unexecuted_blocks=1 00:07:30.984 00:07:30.984 ' 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:30.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.984 --rc genhtml_branch_coverage=1 00:07:30.984 --rc genhtml_function_coverage=1 00:07:30.984 --rc genhtml_legend=1 00:07:30.984 --rc geninfo_all_blocks=1 00:07:30.984 --rc geninfo_unexecuted_blocks=1 00:07:30.984 00:07:30.984 ' 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:30.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.984 --rc genhtml_branch_coverage=1 00:07:30.984 --rc genhtml_function_coverage=1 00:07:30.984 --rc genhtml_legend=1 00:07:30.984 --rc geninfo_all_blocks=1 00:07:30.984 --rc geninfo_unexecuted_blocks=1 00:07:30.984 00:07:30.984 ' 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:30.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.984 --rc genhtml_branch_coverage=1 00:07:30.984 --rc genhtml_function_coverage=1 00:07:30.984 --rc genhtml_legend=1 00:07:30.984 --rc geninfo_all_blocks=1 00:07:30.984 --rc geninfo_unexecuted_blocks=1 00:07:30.984 00:07:30.984 ' 00:07:30.984 08:39:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 
00:07:30.984 08:39:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57718 00:07:30.984 08:39:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57718 00:07:30.984 08:39:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@832 -- # '[' -z 57718 ']' 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:30.984 08:39:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.243 [2024-11-27 08:39:27.792890] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:07:31.243 [2024-11-27 08:39:27.793633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57718 ] 00:07:31.243 [2024-11-27 08:39:27.978628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.502 [2024-11-27 08:39:28.115123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.440 08:39:29 alias_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:32.440 08:39:29 alias_rpc -- common/autotest_common.sh@865 -- # return 0 00:07:32.440 08:39:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:32.698 08:39:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57718 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@951 -- # '[' -z 57718 ']' 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@955 -- # kill -0 57718 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@956 -- # uname 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 57718 00:07:32.698 killing process with pid 57718 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 57718' 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@970 -- # kill 57718 00:07:32.698 08:39:29 alias_rpc -- common/autotest_common.sh@975 -- # wait 57718 00:07:35.303 ************************************ 00:07:35.303 END TEST alias_rpc 00:07:35.303 ************************************ 00:07:35.303 00:07:35.303 real 
0m4.255s 00:07:35.303 user 0m4.248s 00:07:35.303 sys 0m0.767s 00:07:35.303 08:39:31 alias_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:35.303 08:39:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.303 08:39:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:35.303 08:39:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:35.303 08:39:31 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:35.303 08:39:31 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:35.303 08:39:31 -- common/autotest_common.sh@10 -- # set +x 00:07:35.303 ************************************ 00:07:35.303 START TEST spdkcli_tcp 00:07:35.303 ************************************ 00:07:35.303 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:35.303 * Looking for test storage... 00:07:35.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:35.303 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:35.303 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:35.303 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov --version 00:07:35.303 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.303 
08:39:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.303 08:39:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.304 08:39:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.304 08:39:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:35.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.304 --rc genhtml_branch_coverage=1 00:07:35.304 --rc genhtml_function_coverage=1 00:07:35.304 --rc genhtml_legend=1 
00:07:35.304 --rc geninfo_all_blocks=1 00:07:35.304 --rc geninfo_unexecuted_blocks=1 00:07:35.304 00:07:35.304 ' 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:35.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.304 --rc genhtml_branch_coverage=1 00:07:35.304 --rc genhtml_function_coverage=1 00:07:35.304 --rc genhtml_legend=1 00:07:35.304 --rc geninfo_all_blocks=1 00:07:35.304 --rc geninfo_unexecuted_blocks=1 00:07:35.304 00:07:35.304 ' 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:35.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.304 --rc genhtml_branch_coverage=1 00:07:35.304 --rc genhtml_function_coverage=1 00:07:35.304 --rc genhtml_legend=1 00:07:35.304 --rc geninfo_all_blocks=1 00:07:35.304 --rc geninfo_unexecuted_blocks=1 00:07:35.304 00:07:35.304 ' 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:35.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.304 --rc genhtml_branch_coverage=1 00:07:35.304 --rc genhtml_function_coverage=1 00:07:35.304 --rc genhtml_legend=1 00:07:35.304 --rc geninfo_all_blocks=1 00:07:35.304 --rc geninfo_unexecuted_blocks=1 00:07:35.304 00:07:35.304 ' 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:35.304 08:39:31 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57836 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:35.304 08:39:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57836 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@832 -- # '[' -z 57836 ']' 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:35.304 08:39:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.561 [2024-11-27 08:39:32.116596] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:07:35.561 [2024-11-27 08:39:32.117037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57836 ] 00:07:35.561 [2024-11-27 08:39:32.306682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:35.819 [2024-11-27 08:39:32.457889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.820 [2024-11-27 08:39:32.457899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.753 08:39:33 spdkcli_tcp -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:36.753 08:39:33 spdkcli_tcp -- common/autotest_common.sh@865 -- # return 0 00:07:36.754 08:39:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57853 00:07:36.754 08:39:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:36.754 08:39:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:37.013 [ 00:07:37.013 "bdev_malloc_delete", 00:07:37.013 "bdev_malloc_create", 00:07:37.013 "bdev_null_resize", 00:07:37.013 "bdev_null_delete", 00:07:37.013 "bdev_null_create", 00:07:37.013 "bdev_nvme_cuse_unregister", 00:07:37.013 "bdev_nvme_cuse_register", 00:07:37.013 "bdev_opal_new_user", 00:07:37.013 "bdev_opal_set_lock_state", 00:07:37.013 "bdev_opal_delete", 00:07:37.013 "bdev_opal_get_info", 00:07:37.013 "bdev_opal_create", 00:07:37.013 "bdev_nvme_opal_revert", 00:07:37.013 "bdev_nvme_opal_init", 00:07:37.013 "bdev_nvme_send_cmd", 00:07:37.013 "bdev_nvme_set_keys", 00:07:37.013 "bdev_nvme_get_path_iostat", 00:07:37.013 "bdev_nvme_get_mdns_discovery_info", 00:07:37.013 "bdev_nvme_stop_mdns_discovery", 00:07:37.013 "bdev_nvme_start_mdns_discovery", 00:07:37.013 "bdev_nvme_set_multipath_policy", 00:07:37.013 
"bdev_nvme_set_preferred_path", 00:07:37.013 "bdev_nvme_get_io_paths", 00:07:37.013 "bdev_nvme_remove_error_injection", 00:07:37.013 "bdev_nvme_add_error_injection", 00:07:37.013 "bdev_nvme_get_discovery_info", 00:07:37.013 "bdev_nvme_stop_discovery", 00:07:37.013 "bdev_nvme_start_discovery", 00:07:37.013 "bdev_nvme_get_controller_health_info", 00:07:37.013 "bdev_nvme_disable_controller", 00:07:37.013 "bdev_nvme_enable_controller", 00:07:37.013 "bdev_nvme_reset_controller", 00:07:37.013 "bdev_nvme_get_transport_statistics", 00:07:37.013 "bdev_nvme_apply_firmware", 00:07:37.013 "bdev_nvme_detach_controller", 00:07:37.013 "bdev_nvme_get_controllers", 00:07:37.013 "bdev_nvme_attach_controller", 00:07:37.013 "bdev_nvme_set_hotplug", 00:07:37.013 "bdev_nvme_set_options", 00:07:37.013 "bdev_passthru_delete", 00:07:37.013 "bdev_passthru_create", 00:07:37.013 "bdev_lvol_set_parent_bdev", 00:07:37.013 "bdev_lvol_set_parent", 00:07:37.013 "bdev_lvol_check_shallow_copy", 00:07:37.013 "bdev_lvol_start_shallow_copy", 00:07:37.013 "bdev_lvol_grow_lvstore", 00:07:37.013 "bdev_lvol_get_lvols", 00:07:37.013 "bdev_lvol_get_lvstores", 00:07:37.013 "bdev_lvol_delete", 00:07:37.013 "bdev_lvol_set_read_only", 00:07:37.013 "bdev_lvol_resize", 00:07:37.013 "bdev_lvol_decouple_parent", 00:07:37.013 "bdev_lvol_inflate", 00:07:37.013 "bdev_lvol_rename", 00:07:37.013 "bdev_lvol_clone_bdev", 00:07:37.013 "bdev_lvol_clone", 00:07:37.013 "bdev_lvol_snapshot", 00:07:37.013 "bdev_lvol_create", 00:07:37.013 "bdev_lvol_delete_lvstore", 00:07:37.013 "bdev_lvol_rename_lvstore", 00:07:37.013 "bdev_lvol_create_lvstore", 00:07:37.013 "bdev_raid_set_options", 00:07:37.013 "bdev_raid_remove_base_bdev", 00:07:37.013 "bdev_raid_add_base_bdev", 00:07:37.013 "bdev_raid_delete", 00:07:37.013 "bdev_raid_create", 00:07:37.013 "bdev_raid_get_bdevs", 00:07:37.013 "bdev_error_inject_error", 00:07:37.013 "bdev_error_delete", 00:07:37.013 "bdev_error_create", 00:07:37.013 "bdev_split_delete", 00:07:37.013 
"bdev_split_create", 00:07:37.013 "bdev_delay_delete", 00:07:37.013 "bdev_delay_create", 00:07:37.013 "bdev_delay_update_latency", 00:07:37.013 "bdev_zone_block_delete", 00:07:37.013 "bdev_zone_block_create", 00:07:37.013 "blobfs_create", 00:07:37.013 "blobfs_detect", 00:07:37.013 "blobfs_set_cache_size", 00:07:37.013 "bdev_aio_delete", 00:07:37.013 "bdev_aio_rescan", 00:07:37.013 "bdev_aio_create", 00:07:37.013 "bdev_ftl_set_property", 00:07:37.013 "bdev_ftl_get_properties", 00:07:37.013 "bdev_ftl_get_stats", 00:07:37.013 "bdev_ftl_unmap", 00:07:37.013 "bdev_ftl_unload", 00:07:37.013 "bdev_ftl_delete", 00:07:37.013 "bdev_ftl_load", 00:07:37.013 "bdev_ftl_create", 00:07:37.013 "bdev_virtio_attach_controller", 00:07:37.013 "bdev_virtio_scsi_get_devices", 00:07:37.013 "bdev_virtio_detach_controller", 00:07:37.013 "bdev_virtio_blk_set_hotplug", 00:07:37.013 "bdev_iscsi_delete", 00:07:37.013 "bdev_iscsi_create", 00:07:37.014 "bdev_iscsi_set_options", 00:07:37.014 "accel_error_inject_error", 00:07:37.014 "ioat_scan_accel_module", 00:07:37.014 "dsa_scan_accel_module", 00:07:37.014 "iaa_scan_accel_module", 00:07:37.014 "keyring_file_remove_key", 00:07:37.014 "keyring_file_add_key", 00:07:37.014 "keyring_linux_set_options", 00:07:37.014 "fsdev_aio_delete", 00:07:37.014 "fsdev_aio_create", 00:07:37.014 "iscsi_get_histogram", 00:07:37.014 "iscsi_enable_histogram", 00:07:37.014 "iscsi_set_options", 00:07:37.014 "iscsi_get_auth_groups", 00:07:37.014 "iscsi_auth_group_remove_secret", 00:07:37.014 "iscsi_auth_group_add_secret", 00:07:37.014 "iscsi_delete_auth_group", 00:07:37.014 "iscsi_create_auth_group", 00:07:37.014 "iscsi_set_discovery_auth", 00:07:37.014 "iscsi_get_options", 00:07:37.014 "iscsi_target_node_request_logout", 00:07:37.014 "iscsi_target_node_set_redirect", 00:07:37.014 "iscsi_target_node_set_auth", 00:07:37.014 "iscsi_target_node_add_lun", 00:07:37.014 "iscsi_get_stats", 00:07:37.014 "iscsi_get_connections", 00:07:37.014 "iscsi_portal_group_set_auth", 
00:07:37.014 "iscsi_start_portal_group", 00:07:37.014 "iscsi_delete_portal_group", 00:07:37.014 "iscsi_create_portal_group", 00:07:37.014 "iscsi_get_portal_groups", 00:07:37.014 "iscsi_delete_target_node", 00:07:37.014 "iscsi_target_node_remove_pg_ig_maps", 00:07:37.014 "iscsi_target_node_add_pg_ig_maps", 00:07:37.014 "iscsi_create_target_node", 00:07:37.014 "iscsi_get_target_nodes", 00:07:37.014 "iscsi_delete_initiator_group", 00:07:37.014 "iscsi_initiator_group_remove_initiators", 00:07:37.014 "iscsi_initiator_group_add_initiators", 00:07:37.014 "iscsi_create_initiator_group", 00:07:37.014 "iscsi_get_initiator_groups", 00:07:37.014 "nvmf_set_crdt", 00:07:37.014 "nvmf_set_config", 00:07:37.014 "nvmf_set_max_subsystems", 00:07:37.014 "nvmf_stop_mdns_prr", 00:07:37.014 "nvmf_publish_mdns_prr", 00:07:37.014 "nvmf_subsystem_get_listeners", 00:07:37.014 "nvmf_subsystem_get_qpairs", 00:07:37.014 "nvmf_subsystem_get_controllers", 00:07:37.014 "nvmf_get_stats", 00:07:37.014 "nvmf_get_transports", 00:07:37.014 "nvmf_create_transport", 00:07:37.014 "nvmf_get_targets", 00:07:37.014 "nvmf_delete_target", 00:07:37.014 "nvmf_create_target", 00:07:37.014 "nvmf_subsystem_allow_any_host", 00:07:37.014 "nvmf_subsystem_set_keys", 00:07:37.014 "nvmf_subsystem_remove_host", 00:07:37.014 "nvmf_subsystem_add_host", 00:07:37.014 "nvmf_ns_remove_host", 00:07:37.014 "nvmf_ns_add_host", 00:07:37.014 "nvmf_subsystem_remove_ns", 00:07:37.014 "nvmf_subsystem_set_ns_ana_group", 00:07:37.014 "nvmf_subsystem_add_ns", 00:07:37.014 "nvmf_subsystem_listener_set_ana_state", 00:07:37.014 "nvmf_discovery_get_referrals", 00:07:37.014 "nvmf_discovery_remove_referral", 00:07:37.014 "nvmf_discovery_add_referral", 00:07:37.014 "nvmf_subsystem_remove_listener", 00:07:37.014 "nvmf_subsystem_add_listener", 00:07:37.014 "nvmf_delete_subsystem", 00:07:37.014 "nvmf_create_subsystem", 00:07:37.014 "nvmf_get_subsystems", 00:07:37.014 "env_dpdk_get_mem_stats", 00:07:37.014 "nbd_get_disks", 00:07:37.014 
"nbd_stop_disk", 00:07:37.014 "nbd_start_disk", 00:07:37.014 "ublk_recover_disk", 00:07:37.014 "ublk_get_disks", 00:07:37.014 "ublk_stop_disk", 00:07:37.014 "ublk_start_disk", 00:07:37.014 "ublk_destroy_target", 00:07:37.014 "ublk_create_target", 00:07:37.014 "virtio_blk_create_transport", 00:07:37.014 "virtio_blk_get_transports", 00:07:37.014 "vhost_controller_set_coalescing", 00:07:37.014 "vhost_get_controllers", 00:07:37.014 "vhost_delete_controller", 00:07:37.014 "vhost_create_blk_controller", 00:07:37.014 "vhost_scsi_controller_remove_target", 00:07:37.014 "vhost_scsi_controller_add_target", 00:07:37.014 "vhost_start_scsi_controller", 00:07:37.014 "vhost_create_scsi_controller", 00:07:37.014 "thread_set_cpumask", 00:07:37.014 "scheduler_set_options", 00:07:37.014 "framework_get_governor", 00:07:37.014 "framework_get_scheduler", 00:07:37.014 "framework_set_scheduler", 00:07:37.014 "framework_get_reactors", 00:07:37.014 "thread_get_io_channels", 00:07:37.014 "thread_get_pollers", 00:07:37.014 "thread_get_stats", 00:07:37.014 "framework_monitor_context_switch", 00:07:37.014 "spdk_kill_instance", 00:07:37.014 "log_enable_timestamps", 00:07:37.014 "log_get_flags", 00:07:37.014 "log_clear_flag", 00:07:37.014 "log_set_flag", 00:07:37.014 "log_get_level", 00:07:37.014 "log_set_level", 00:07:37.014 "log_get_print_level", 00:07:37.014 "log_set_print_level", 00:07:37.014 "framework_enable_cpumask_locks", 00:07:37.014 "framework_disable_cpumask_locks", 00:07:37.014 "framework_wait_init", 00:07:37.014 "framework_start_init", 00:07:37.014 "scsi_get_devices", 00:07:37.014 "bdev_get_histogram", 00:07:37.014 "bdev_enable_histogram", 00:07:37.014 "bdev_set_qos_limit", 00:07:37.014 "bdev_set_qd_sampling_period", 00:07:37.014 "bdev_get_bdevs", 00:07:37.014 "bdev_reset_iostat", 00:07:37.014 "bdev_get_iostat", 00:07:37.014 "bdev_examine", 00:07:37.014 "bdev_wait_for_examine", 00:07:37.014 "bdev_set_options", 00:07:37.014 "accel_get_stats", 00:07:37.014 "accel_set_options", 
00:07:37.014 "accel_set_driver", 00:07:37.014 "accel_crypto_key_destroy", 00:07:37.014 "accel_crypto_keys_get", 00:07:37.014 "accel_crypto_key_create", 00:07:37.014 "accel_assign_opc", 00:07:37.014 "accel_get_module_info", 00:07:37.014 "accel_get_opc_assignments", 00:07:37.014 "vmd_rescan", 00:07:37.014 "vmd_remove_device", 00:07:37.014 "vmd_enable", 00:07:37.014 "sock_get_default_impl", 00:07:37.014 "sock_set_default_impl", 00:07:37.014 "sock_impl_set_options", 00:07:37.014 "sock_impl_get_options", 00:07:37.014 "iobuf_get_stats", 00:07:37.014 "iobuf_set_options", 00:07:37.014 "keyring_get_keys", 00:07:37.014 "framework_get_pci_devices", 00:07:37.014 "framework_get_config", 00:07:37.014 "framework_get_subsystems", 00:07:37.014 "fsdev_set_opts", 00:07:37.014 "fsdev_get_opts", 00:07:37.014 "trace_get_info", 00:07:37.014 "trace_get_tpoint_group_mask", 00:07:37.014 "trace_disable_tpoint_group", 00:07:37.014 "trace_enable_tpoint_group", 00:07:37.014 "trace_clear_tpoint_mask", 00:07:37.014 "trace_set_tpoint_mask", 00:07:37.014 "notify_get_notifications", 00:07:37.014 "notify_get_types", 00:07:37.014 "spdk_get_version", 00:07:37.014 "rpc_get_methods" 00:07:37.014 ] 00:07:37.014 08:39:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:37.014 08:39:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.014 08:39:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.014 08:39:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:37.014 08:39:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57836 00:07:37.014 08:39:33 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' -z 57836 ']' 00:07:37.014 08:39:33 spdkcli_tcp -- common/autotest_common.sh@955 -- # kill -0 57836 00:07:37.014 08:39:33 spdkcli_tcp -- common/autotest_common.sh@956 -- # uname 00:07:37.014 08:39:33 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:37.014 08:39:33 spdkcli_tcp -- 
common/autotest_common.sh@957 -- # ps --no-headers -o comm= 57836 00:07:37.274 killing process with pid 57836 00:07:37.274 08:39:33 spdkcli_tcp -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:07:37.274 08:39:33 spdkcli_tcp -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:07:37.274 08:39:33 spdkcli_tcp -- common/autotest_common.sh@969 -- # echo 'killing process with pid 57836' 00:07:37.274 08:39:33 spdkcli_tcp -- common/autotest_common.sh@970 -- # kill 57836 00:07:37.274 08:39:33 spdkcli_tcp -- common/autotest_common.sh@975 -- # wait 57836 00:07:39.807 ************************************ 00:07:39.807 END TEST spdkcli_tcp 00:07:39.807 ************************************ 00:07:39.807 00:07:39.807 real 0m4.509s 00:07:39.807 user 0m7.975s 00:07:39.807 sys 0m0.843s 00:07:39.807 08:39:36 spdkcli_tcp -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:39.807 08:39:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.807 08:39:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:39.807 08:39:36 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:39.807 08:39:36 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:39.807 08:39:36 -- common/autotest_common.sh@10 -- # set +x 00:07:39.807 ************************************ 00:07:39.807 START TEST dpdk_mem_utility 00:07:39.807 ************************************ 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:39.807 * Looking for test storage... 
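The teardown traced above uses the `kill -0` idiom: signal 0 delivers nothing but reports whether the pid exists, after which the helper sends a real signal and `wait`s to reap the child. A minimal sketch of that pattern, with illustrative code rather than the exact autotest_common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess idiom from the log: probe, kill, reap.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # signal 0: existence probe only
  kill "$pid"                              # actually terminate it
  wait "$pid" 2>/dev/null || true          # reap; wait returns 143 for SIGTERM
  echo "killing process with pid $pid"
}

sleep 60 &
killprocess $!    # prints "killing process with pid <pid>" and reaps the sleep
```

Note that `wait` only works on the shell's own children, which is why the test scripts run `spdk_tgt` in the background of the same shell that later kills it.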
00:07:39.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov --version 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.807 08:39:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:39.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.807 --rc genhtml_branch_coverage=1 00:07:39.807 --rc genhtml_function_coverage=1 00:07:39.807 --rc genhtml_legend=1 00:07:39.807 --rc geninfo_all_blocks=1 00:07:39.807 --rc geninfo_unexecuted_blocks=1 00:07:39.807 00:07:39.807 ' 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:39.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.807 --rc genhtml_branch_coverage=1 00:07:39.807 --rc genhtml_function_coverage=1 00:07:39.807 --rc genhtml_legend=1 00:07:39.807 --rc geninfo_all_blocks=1 00:07:39.807 --rc 
geninfo_unexecuted_blocks=1 00:07:39.807 00:07:39.807 ' 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:39.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.807 --rc genhtml_branch_coverage=1 00:07:39.807 --rc genhtml_function_coverage=1 00:07:39.807 --rc genhtml_legend=1 00:07:39.807 --rc geninfo_all_blocks=1 00:07:39.807 --rc geninfo_unexecuted_blocks=1 00:07:39.807 00:07:39.807 ' 00:07:39.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:39.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.807 --rc genhtml_branch_coverage=1 00:07:39.807 --rc genhtml_function_coverage=1 00:07:39.807 --rc genhtml_legend=1 00:07:39.807 --rc geninfo_all_blocks=1 00:07:39.807 --rc geninfo_unexecuted_blocks=1 00:07:39.807 00:07:39.807 ' 00:07:39.807 08:39:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:39.807 08:39:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57958 00:07:39.807 08:39:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57958 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@832 -- # '[' -z 57958 ']' 00:07:39.807 08:39:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
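The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." lines come from a waitforlisten-style retry loop (`max_retries=100` in the trace). A simplified, hypothetical sketch of that polling pattern, assuming the readiness condition is just the socket file appearing (the real helper also confirms the RPC server answers):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a waitforlisten-style loop; names are illustrative.
waitforsocket() {
  local sock=$1 max_retries=${2:-100} i
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                       # gave up after max_retries polls
}
```

With a bounded retry count the test fails fast if `spdk_tgt` never comes up, instead of hanging the CI job.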
00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:39.807 08:39:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:40.066 [2024-11-27 08:39:36.666240] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:07:40.067 [2024-11-27 08:39:36.666982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57958 ] 00:07:40.326 [2024-11-27 08:39:36.854985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.326 [2024-11-27 08:39:37.014593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.271 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:41.271 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@865 -- # return 0 00:07:41.271 08:39:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:41.271 08:39:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:41.271 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.271 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:41.532 { 00:07:41.532 "filename": "/tmp/spdk_mem_dump.txt" 00:07:41.532 } 00:07:41.532 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.532 08:39:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:41.532 DPDK memory size 816.000000 MiB in 1 heap(s) 00:07:41.532 1 heaps totaling size 816.000000 MiB 00:07:41.532 size: 816.000000 MiB heap id: 0 00:07:41.532 end heaps---------- 00:07:41.532 9 mempools totaling size 595.772034 MiB 00:07:41.532 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:41.532 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:41.532 size: 92.545471 MiB name: bdev_io_57958 00:07:41.532 size: 50.003479 MiB name: msgpool_57958 00:07:41.532 size: 36.509338 MiB name: fsdev_io_57958 00:07:41.532 size: 21.763794 MiB name: PDU_Pool 00:07:41.532 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:41.532 size: 4.133484 MiB name: evtpool_57958 00:07:41.532 size: 0.026123 MiB name: Session_Pool 00:07:41.532 end mempools------- 00:07:41.532 6 memzones totaling size 4.142822 MiB 00:07:41.532 size: 1.000366 MiB name: RG_ring_0_57958 00:07:41.532 size: 1.000366 MiB name: RG_ring_1_57958 00:07:41.532 size: 1.000366 MiB name: RG_ring_4_57958 00:07:41.532 size: 1.000366 MiB name: RG_ring_5_57958 00:07:41.532 size: 0.125366 MiB name: RG_ring_2_57958 00:07:41.532 size: 0.015991 MiB name: RG_ring_3_57958 00:07:41.532 end memzones------- 00:07:41.532 08:39:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:41.532 heap id: 0 total size: 816.000000 MiB number of busy elements: 316 number of free elements: 18 00:07:41.532 list of free elements. 
size: 16.791138 MiB 00:07:41.532 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:41.532 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:41.532 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:41.532 element at address: 0x200018d00040 with size: 0.999939 MiB 00:07:41.532 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:41.532 element at address: 0x200019200000 with size: 0.999084 MiB 00:07:41.533 element at address: 0x200031e00000 with size: 0.994324 MiB 00:07:41.533 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:41.533 element at address: 0x200018a00000 with size: 0.959656 MiB 00:07:41.533 element at address: 0x200019500040 with size: 0.936401 MiB 00:07:41.533 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:41.533 element at address: 0x20001ac00000 with size: 0.561462 MiB 00:07:41.533 element at address: 0x200000c00000 with size: 0.490173 MiB 00:07:41.533 element at address: 0x200018e00000 with size: 0.487976 MiB 00:07:41.533 element at address: 0x200019600000 with size: 0.485413 MiB 00:07:41.533 element at address: 0x200012c00000 with size: 0.443481 MiB 00:07:41.533 element at address: 0x200028000000 with size: 0.390442 MiB 00:07:41.533 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:41.533 list of standard malloc elements. 
size: 199.287964 MiB 00:07:41.533 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:41.533 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:41.533 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:07:41.533 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:41.533 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:41.533 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:41.533 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:07:41.533 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:41.533 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:41.533 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:07:41.533 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:41.533 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:41.533 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:07:41.533 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:41.533 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:41.533 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c71880 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c71980 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c72080 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012c72180 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:07:41.533 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:07:41.534 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:41.534 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:07:41.534 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac910c0 with size: 0.000244 
MiB 00:07:41.534 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac92cc0 
with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:07:41.534 element at 
address: 0x20001ac948c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:07:41.534 element at address: 0x200028063f40 with size: 0.000244 MiB 00:07:41.534 element at address: 0x200028064040 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806af80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b080 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b180 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b280 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b380 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b480 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b580 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b680 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b780 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b880 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806b980 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806bb80 with size: 0.000244 MiB 
00:07:41.534 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806be80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c080 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c180 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c280 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c380 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c480 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c580 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c680 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c780 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c880 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806c980 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806d080 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806d180 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806d280 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806d380 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806d480 with size: 0.000244 MiB 00:07:41.534 element at address: 0x20002806d580 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806d680 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806d780 with 
size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806d880 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806d980 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806da80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806db80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806de80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806df80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e080 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e180 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e280 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e380 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e480 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e580 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e680 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e780 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e880 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806e980 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f080 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f180 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f280 with size: 0.000244 MiB 00:07:41.535 element at address: 
0x20002806f380 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f480 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f580 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f680 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f780 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f880 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806f980 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:07:41.535 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:07:41.535 list of memzone associated elements. size: 599.920898 MiB 00:07:41.535 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:07:41.535 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:41.535 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:07:41.535 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:41.535 element at address: 0x200012df4740 with size: 92.045105 MiB 00:07:41.535 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57958_0 00:07:41.535 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:41.535 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57958_0 00:07:41.535 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:41.535 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57958_0 00:07:41.535 element at address: 0x2000197be900 with size: 20.255615 MiB 00:07:41.535 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:41.535 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:07:41.535 associated memzone info: size: 18.004944 MiB name: 
MP_SCSI_TASK_Pool_0 00:07:41.535 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:41.535 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57958_0 00:07:41.535 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:41.535 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57958 00:07:41.535 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:41.535 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57958 00:07:41.535 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:41.535 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:41.535 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:07:41.535 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:41.535 element at address: 0x200018afde00 with size: 1.008179 MiB 00:07:41.535 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:41.535 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:07:41.535 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:41.535 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:41.535 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57958 00:07:41.535 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:41.535 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57958 00:07:41.535 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:07:41.535 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57958 00:07:41.535 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:07:41.535 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57958 00:07:41.535 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:41.535 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57958 00:07:41.535 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:41.535 associated memzone info: size: 0.500366 MiB name: 
RG_MP_bdev_io_57958 00:07:41.535 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:07:41.535 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:41.535 element at address: 0x200012c72280 with size: 0.500549 MiB 00:07:41.535 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:41.535 element at address: 0x20001967c440 with size: 0.250549 MiB 00:07:41.535 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:41.535 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:41.535 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57958 00:07:41.535 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:41.535 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57958 00:07:41.535 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:07:41.535 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:41.535 element at address: 0x200028064140 with size: 0.023804 MiB 00:07:41.535 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:41.535 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:41.535 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57958 00:07:41.535 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:07:41.535 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:41.535 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:41.535 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57958 00:07:41.535 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:41.535 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57958 00:07:41.535 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:41.535 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57958 00:07:41.535 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:07:41.535 associated memzone info: size: 0.000183 MiB 
name: MP_Session_Pool 00:07:41.535 08:39:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:41.535 08:39:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57958 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' -z 57958 ']' 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # kill -0 57958 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # uname 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 57958 00:07:41.535 killing process with pid 57958 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@969 -- # echo 'killing process with pid 57958' 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@970 -- # kill 57958 00:07:41.535 08:39:38 dpdk_mem_utility -- common/autotest_common.sh@975 -- # wait 57958 00:07:44.067 ************************************ 00:07:44.068 END TEST dpdk_mem_utility 00:07:44.068 ************************************ 00:07:44.068 00:07:44.068 real 0m4.405s 00:07:44.068 user 0m4.327s 00:07:44.068 sys 0m0.747s 00:07:44.068 08:39:40 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:44.068 08:39:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:44.068 08:39:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:44.068 08:39:40 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:44.068 08:39:40 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:44.068 08:39:40 -- common/autotest_common.sh@10 -- # 
set +x 00:07:44.068 ************************************ 00:07:44.068 START TEST event 00:07:44.068 ************************************ 00:07:44.068 08:39:40 event -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:44.327 * Looking for test storage... 00:07:44.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1690 -- # lcov --version 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:44.327 08:39:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.327 08:39:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.327 08:39:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.327 08:39:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.327 08:39:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.327 08:39:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.327 08:39:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.327 08:39:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.327 08:39:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.327 08:39:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.327 08:39:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.327 08:39:40 event -- scripts/common.sh@344 -- # case "$op" in 00:07:44.327 08:39:40 event -- scripts/common.sh@345 -- # : 1 00:07:44.327 08:39:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.327 08:39:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.327 08:39:40 event -- scripts/common.sh@365 -- # decimal 1 00:07:44.327 08:39:40 event -- scripts/common.sh@353 -- # local d=1 00:07:44.327 08:39:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.327 08:39:40 event -- scripts/common.sh@355 -- # echo 1 00:07:44.327 08:39:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.327 08:39:40 event -- scripts/common.sh@366 -- # decimal 2 00:07:44.327 08:39:40 event -- scripts/common.sh@353 -- # local d=2 00:07:44.327 08:39:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.327 08:39:40 event -- scripts/common.sh@355 -- # echo 2 00:07:44.327 08:39:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.327 08:39:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.327 08:39:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.327 08:39:40 event -- scripts/common.sh@368 -- # return 0 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:44.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.327 --rc genhtml_branch_coverage=1 00:07:44.327 --rc genhtml_function_coverage=1 00:07:44.327 --rc genhtml_legend=1 00:07:44.327 --rc geninfo_all_blocks=1 00:07:44.327 --rc geninfo_unexecuted_blocks=1 00:07:44.327 00:07:44.327 ' 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:44.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.327 --rc genhtml_branch_coverage=1 00:07:44.327 --rc genhtml_function_coverage=1 00:07:44.327 --rc genhtml_legend=1 00:07:44.327 --rc geninfo_all_blocks=1 00:07:44.327 --rc geninfo_unexecuted_blocks=1 00:07:44.327 00:07:44.327 ' 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:44.327 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:44.327 --rc genhtml_branch_coverage=1 00:07:44.327 --rc genhtml_function_coverage=1 00:07:44.327 --rc genhtml_legend=1 00:07:44.327 --rc geninfo_all_blocks=1 00:07:44.327 --rc geninfo_unexecuted_blocks=1 00:07:44.327 00:07:44.327 ' 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:44.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.327 --rc genhtml_branch_coverage=1 00:07:44.327 --rc genhtml_function_coverage=1 00:07:44.327 --rc genhtml_legend=1 00:07:44.327 --rc geninfo_all_blocks=1 00:07:44.327 --rc geninfo_unexecuted_blocks=1 00:07:44.327 00:07:44.327 ' 00:07:44.327 08:39:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:44.327 08:39:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:44.327 08:39:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1102 -- # '[' 6 -le 1 ']' 00:07:44.327 08:39:40 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:44.327 08:39:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:44.327 ************************************ 00:07:44.327 START TEST event_perf 00:07:44.327 ************************************ 00:07:44.327 08:39:40 event.event_perf -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:44.327 Running I/O for 1 seconds...[2024-11-27 08:39:41.045723] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:07:44.327 [2024-11-27 08:39:41.046124] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58066 ] 00:07:44.585 [2024-11-27 08:39:41.243196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.844 [2024-11-27 08:39:41.400760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.844 [2024-11-27 08:39:41.401163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.844 Running I/O for 1 seconds...[2024-11-27 08:39:41.401165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.844 [2024-11-27 08:39:41.401019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.221 00:07:46.221 lcore 0: 119616 00:07:46.221 lcore 1: 119616 00:07:46.221 lcore 2: 119616 00:07:46.221 lcore 3: 119616 00:07:46.221 done. 
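The `lt 1.15 2` / `cmp_versions` trace earlier in this run (scripts/common.sh splitting each version on `IFS=.-:` and comparing numeric fields left to right) can be sketched as a standalone helper. Padding missing fields with 0 is an assumption about the untraced branches:

```shell
#!/usr/bin/env bash
# Standalone sketch of the cmp_versions "<" path traced from scripts/common.sh:
# split both versions on '.', '-' and ':' and compare numeric fields in order.
lt() {
    local -a ver1 ver2
    local v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # pad missing fields with 0 (assumed)
        (( d1 < d2 )) && return 0
        (( d1 > d2 )) && return 1
    done
    return 1   # equal versions are not strictly less
}
```

With this, `lt 1.15 2` succeeds (1 < 2 on the first field), which is the lcov version gate exercised above.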
00:07:46.221 00:07:46.221 real 0m1.684s 00:07:46.221 user 0m4.411s 00:07:46.221 sys 0m0.139s 00:07:46.221 08:39:42 event.event_perf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:46.221 ************************************ 00:07:46.221 END TEST event_perf 00:07:46.221 ************************************ 00:07:46.221 08:39:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.221 08:39:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:46.221 08:39:42 event -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:07:46.221 08:39:42 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:46.221 08:39:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.221 ************************************ 00:07:46.221 START TEST event_reactor 00:07:46.221 ************************************ 00:07:46.221 08:39:42 event.event_reactor -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:46.221 [2024-11-27 08:39:42.786137] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:07:46.221 [2024-11-27 08:39:42.786394] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58111 ] 00:07:46.221 [2024-11-27 08:39:42.977723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.480 [2024-11-27 08:39:43.126095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.859 test_start 00:07:47.859 oneshot 00:07:47.859 tick 100 00:07:47.859 tick 100 00:07:47.859 tick 250 00:07:47.859 tick 100 00:07:47.859 tick 100 00:07:47.859 tick 100 00:07:47.859 tick 250 00:07:47.859 tick 500 00:07:47.859 tick 100 00:07:47.859 tick 100 00:07:47.859 tick 250 00:07:47.859 tick 100 00:07:47.859 tick 100 00:07:47.859 test_end 00:07:47.859 ************************************ 00:07:47.859 END TEST event_reactor 00:07:47.859 ************************************ 00:07:47.859 00:07:47.859 real 0m1.624s 00:07:47.859 user 0m1.396s 00:07:47.859 sys 0m0.117s 00:07:47.859 08:39:44 event.event_reactor -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:47.859 08:39:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:47.859 08:39:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:47.859 08:39:44 event -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:07:47.859 08:39:44 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:47.859 08:39:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:47.859 ************************************ 00:07:47.859 START TEST event_reactor_perf 00:07:47.859 ************************************ 00:07:47.859 08:39:44 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:47.859 [2024-11-27 
08:39:44.453826] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:07:47.859 [2024-11-27 08:39:44.454245] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58152 ] 00:07:48.118 [2024-11-27 08:39:44.638287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.119 [2024-11-27 08:39:44.770431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.496 test_start 00:07:49.496 test_end 00:07:49.496 Performance: 281166 events per second 00:07:49.496 ************************************ 00:07:49.496 END TEST event_reactor_perf 00:07:49.496 ************************************ 00:07:49.496 00:07:49.496 real 0m1.599s 00:07:49.496 user 0m1.374s 00:07:49.496 sys 0m0.116s 00:07:49.496 08:39:46 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:49.496 08:39:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:49.496 08:39:46 event -- event/event.sh@49 -- # uname -s 00:07:49.496 08:39:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:49.496 08:39:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:49.496 08:39:46 event -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:49.496 08:39:46 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:49.496 08:39:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:49.496 ************************************ 00:07:49.496 START TEST event_scheduler 00:07:49.496 ************************************ 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:49.496 * Looking for test storage... 
00:07:49.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.496 08:39:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.496 --rc genhtml_branch_coverage=1 00:07:49.496 --rc genhtml_function_coverage=1 00:07:49.496 --rc genhtml_legend=1 00:07:49.496 --rc geninfo_all_blocks=1 00:07:49.496 --rc geninfo_unexecuted_blocks=1 00:07:49.496 00:07:49.496 ' 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.496 --rc genhtml_branch_coverage=1 00:07:49.496 --rc genhtml_function_coverage=1 00:07:49.496 --rc 
genhtml_legend=1 00:07:49.496 --rc geninfo_all_blocks=1 00:07:49.496 --rc geninfo_unexecuted_blocks=1 00:07:49.496 00:07:49.496 ' 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.496 --rc genhtml_branch_coverage=1 00:07:49.496 --rc genhtml_function_coverage=1 00:07:49.496 --rc genhtml_legend=1 00:07:49.496 --rc geninfo_all_blocks=1 00:07:49.496 --rc geninfo_unexecuted_blocks=1 00:07:49.496 00:07:49.496 ' 00:07:49.496 08:39:46 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.496 --rc genhtml_branch_coverage=1 00:07:49.496 --rc genhtml_function_coverage=1 00:07:49.496 --rc genhtml_legend=1 00:07:49.496 --rc geninfo_all_blocks=1 00:07:49.496 --rc geninfo_unexecuted_blocks=1 00:07:49.496 00:07:49.496 ' 00:07:49.496 08:39:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:49.496 08:39:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58224 00:07:49.496 08:39:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:49.497 08:39:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:49.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
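The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a waitforlisten-style helper. A minimal sketch of such a poll loop, assuming a 0.1 s interval and an optional retry parameter (the real helper in autotest_common.sh also probes the RPC endpoint itself):

```shell
#!/usr/bin/env bash
# Minimal waitforlisten-style poll: succeed once the target process is alive
# and its UNIX-domain socket exists. max_retries mirrors the traced value of
# 100; the 0.1 s interval and the third parameter are illustrative.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # socket is listening
        sleep 0.1
    done
    return 1   # gave up waiting
}
```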
00:07:49.497 08:39:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58224 00:07:49.497 08:39:46 event.event_scheduler -- common/autotest_common.sh@832 -- # '[' -z 58224 ']' 00:07:49.497 08:39:46 event.event_scheduler -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.497 08:39:46 event.event_scheduler -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:49.497 08:39:46 event.event_scheduler -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.497 08:39:46 event.event_scheduler -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:49.497 08:39:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:49.755 [2024-11-27 08:39:46.387743] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:07:49.756 [2024-11-27 08:39:46.388461] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58224 ] 00:07:50.014 [2024-11-27 08:39:46.592577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.014 [2024-11-27 08:39:46.748764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.014 [2024-11-27 08:39:46.748875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.014 [2024-11-27 08:39:46.749034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.014 [2024-11-27 08:39:46.749735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.582 08:39:47 event.event_scheduler -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:50.582 08:39:47 event.event_scheduler -- common/autotest_common.sh@865 -- # return 0 00:07:50.582 08:39:47 event.event_scheduler -- 
scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:50.582 08:39:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.582 08:39:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:50.841 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:50.841 POWER: Cannot set governor of lcore 0 to userspace 00:07:50.841 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:50.841 POWER: Cannot set governor of lcore 0 to performance 00:07:50.841 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:50.841 POWER: Cannot set governor of lcore 0 to userspace 00:07:50.841 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:50.841 POWER: Cannot set governor of lcore 0 to userspace 00:07:50.841 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:50.841 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:50.841 POWER: Unable to set Power Management Environment for lcore 0 00:07:50.841 [2024-11-27 08:39:47.344769] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:50.841 [2024-11-27 08:39:47.344807] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:50.841 [2024-11-27 08:39:47.344823] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:50.841 [2024-11-27 08:39:47.344852] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:50.841 [2024-11-27 08:39:47.344865] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:50.841 [2024-11-27 08:39:47.344880] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:50.841 08:39:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
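The POWER errors above are DPDK's power library failing to open the per-core cpufreq `scaling_governor` files, after which the dynamic scheduler falls back as logged. An illustrative environment check (not part of the test suite) for whether governors can be set at all:

```shell
#!/usr/bin/env bash
# Illustrative check: can this environment set cpufreq governors at all?
# DPDK's power library needs write access to these files; in VMs (as in
# this run) they are typically absent, producing the POWER errors above.
governors_writable() {
    local gov found=0
    for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        [ -e "$gov" ] || continue   # glob did not match: no cpufreq support
        found=1
        [ -w "$gov" ] || return 1   # present but not writable by this user
    done
    (( found ))                     # fail when no governor files exist
}
```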
00:07:50.841 08:39:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:50.841 08:39:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.841 08:39:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:51.204 [2024-11-27 08:39:47.705075] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:51.204 08:39:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.204 08:39:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:51.204 08:39:47 event.event_scheduler -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:51.204 08:39:47 event.event_scheduler -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:51.204 08:39:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:51.204 ************************************ 00:07:51.204 START TEST scheduler_create_thread 00:07:51.204 ************************************ 00:07:51.204 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # scheduler_create_thread 00:07:51.204 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:51.204 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 2 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:51.205 08:39:47 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 3 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 4 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 5 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 6 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 7 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 8 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 9 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 
08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 10 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.205 08:39:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.591 08:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.591 08:39:49 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:52.591 08:39:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:52.591 08:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.591 08:39:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.973 ************************************ 00:07:53.973 END TEST scheduler_create_thread 00:07:53.973 08:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.973 00:07:53.973 real 0m2.623s 00:07:53.973 user 0m0.023s 00:07:53.973 sys 0m0.004s 00:07:53.973 08:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:53.973 08:39:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:53.973 ************************************ 00:07:53.973 08:39:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:53.973 08:39:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58224 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' -z 58224 ']' 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@955 -- # kill -0 58224 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@956 -- # uname 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 58224 00:07:53.973 killing process with pid 58224 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:07:53.973 08:39:50 event.event_scheduler -- 
common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@969 -- # echo 'killing process with pid 58224' 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@970 -- # kill 58224 00:07:53.973 08:39:50 event.event_scheduler -- common/autotest_common.sh@975 -- # wait 58224 00:07:54.232 [2024-11-27 08:39:50.821128] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:55.611 00:07:55.611 real 0m5.918s 00:07:55.611 user 0m10.087s 00:07:55.611 sys 0m0.592s 00:07:55.611 ************************************ 00:07:55.611 END TEST event_scheduler 00:07:55.611 ************************************ 00:07:55.611 08:39:51 event.event_scheduler -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:55.611 08:39:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:55.611 08:39:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:55.611 08:39:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:55.611 08:39:52 event -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:55.611 08:39:52 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:55.611 08:39:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.611 ************************************ 00:07:55.611 START TEST app_repeat 00:07:55.611 ************************************ 00:07:55.611 08:39:52 event.app_repeat -- common/autotest_common.sh@1126 -- # app_repeat_test 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 
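The killprocess trace above (the same flow used for pid 57958 earlier) checks that the pid argument is set, that the process is still alive via `kill -0`, inspects its name with `ps`, then signals it and waits. A minimal sketch; the real helper in autotest_common.sh handles sudo-wrapped processes specially, and refusing them outright here is a simplification:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow traced above. The sudo branch is
# simplified to a refusal (an assumption about the untraced path).
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # already gone
    process_name=$(ps --no-headers -o comm= "$pid")   # name check, as traced
    [ "$process_name" = sudo ] && return 1            # simplification (assumed)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap if it is our child
}
```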
00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:55.611 Process app_repeat pid: 58335 00:07:55.611 spdk_app_start Round 0 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58335 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58335' 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:55.611 08:39:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58335 /var/tmp/spdk-nbd.sock 00:07:55.611 08:39:52 event.app_repeat -- common/autotest_common.sh@832 -- # '[' -z 58335 ']' 00:07:55.611 08:39:52 event.app_repeat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:55.611 08:39:52 event.app_repeat -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:55.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:55.611 08:39:52 event.app_repeat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:55.611 08:39:52 event.app_repeat -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:55.611 08:39:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:55.611 [2024-11-27 08:39:52.105896] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:07:55.611 [2024-11-27 08:39:52.106112] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58335 ] 00:07:55.611 [2024-11-27 08:39:52.292093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:55.869 [2024-11-27 08:39:52.435740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.869 [2024-11-27 08:39:52.435750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.437 08:39:53 event.app_repeat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:56.437 08:39:53 event.app_repeat -- common/autotest_common.sh@865 -- # return 0 00:07:56.437 08:39:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:57.027 Malloc0 00:07:57.027 08:39:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:57.286 Malloc1 00:07:57.286 08:39:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:57.286 08:39:53 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.286 08:39:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:57.545 /dev/nbd0 00:07:57.545 08:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:57.545 08:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:57.545 08:39:54 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:07:57.545 08:39:54 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:07:57.545 08:39:54 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:57.545 08:39:54 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:57.545 08:39:54 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:07:57.545 08:39:54 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:07:57.545 08:39:54 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:07:57.546 08:39:54 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:07:57.546 08:39:54 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:57.546 1+0 records in 00:07:57.546 1+0 
records out 00:07:57.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309067 s, 13.3 MB/s 00:07:57.546 08:39:54 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:57.546 08:39:54 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:07:57.546 08:39:54 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:57.546 08:39:54 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:07:57.546 08:39:54 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:07:57.546 08:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.546 08:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.546 08:39:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:57.804 /dev/nbd1 00:07:57.804 08:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:57.804 08:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:07:57.804 08:39:54 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:57.804 1+0 records in 00:07:57.804 1+0 records out 00:07:57.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349088 s, 11.7 MB/s 00:07:57.805 08:39:54 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:57.805 08:39:54 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:07:57.805 08:39:54 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:57.805 08:39:54 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:07:57.805 08:39:54 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:07:57.805 08:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.805 08:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.805 08:39:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.805 08:39:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.805 08:39:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.372 08:39:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:58.372 { 00:07:58.372 "nbd_device": "/dev/nbd0", 00:07:58.372 "bdev_name": "Malloc0" 00:07:58.372 }, 00:07:58.372 { 00:07:58.372 "nbd_device": "/dev/nbd1", 00:07:58.372 "bdev_name": "Malloc1" 00:07:58.372 } 00:07:58.372 ]' 00:07:58.372 08:39:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.372 08:39:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.372 { 00:07:58.372 "nbd_device": "/dev/nbd0", 00:07:58.372 "bdev_name": "Malloc0" 00:07:58.372 }, 00:07:58.372 { 00:07:58.372 "nbd_device": "/dev/nbd1", 00:07:58.372 "bdev_name": "Malloc1" 00:07:58.372 } 00:07:58.372 ]' 
00:07:58.372 08:39:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:58.372 /dev/nbd1' 00:07:58.372 08:39:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:58.372 /dev/nbd1' 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:58.373 256+0 records in 00:07:58.373 256+0 records out 00:07:58.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00938155 s, 112 MB/s 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:58.373 256+0 records in 00:07:58.373 256+0 records out 00:07:58.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276058 s, 38.0 MB/s 00:07:58.373 08:39:54 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:58.373 256+0 records in 00:07:58.373 256+0 records out 00:07:58.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330438 s, 31.7 MB/s 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.373 08:39:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.632 08:39:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.891 08:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:59.456 08:39:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:59.456 08:39:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:00.022 08:39:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:00.956 [2024-11-27 08:39:57.673375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:01.215 [2024-11-27 08:39:57.817163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.215 [2024-11-27 08:39:57.817184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.473 
[2024-11-27 08:39:58.027578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:01.473 [2024-11-27 08:39:58.027730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:02.849 spdk_app_start Round 1 00:08:02.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:02.849 08:39:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:02.849 08:39:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:02.849 08:39:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58335 /var/tmp/spdk-nbd.sock 00:08:02.849 08:39:59 event.app_repeat -- common/autotest_common.sh@832 -- # '[' -z 58335 ']' 00:08:02.849 08:39:59 event.app_repeat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.849 08:39:59 event.app_repeat -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:02.849 08:39:59 event.app_repeat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:02.849 08:39:59 event.app_repeat -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:02.849 08:39:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:03.109 08:39:59 event.app_repeat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:03.109 08:39:59 event.app_repeat -- common/autotest_common.sh@865 -- # return 0 00:08:03.109 08:39:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:03.368 Malloc0 00:08:03.368 08:40:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:03.935 Malloc1 00:08:03.935 08:40:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:03.935 08:40:00 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.935 08:40:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:04.210 /dev/nbd0 00:08:04.210 08:40:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:04.210 08:40:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:04.210 1+0 records in 00:08:04.210 1+0 records out 00:08:04.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321724 s, 12.7 MB/s 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:08:04.210 08:40:00 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:04.211 
08:40:00 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:08:04.211 08:40:00 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:08:04.211 08:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.211 08:40:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.211 08:40:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:04.469 /dev/nbd1 00:08:04.469 08:40:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:04.469 08:40:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:04.469 1+0 records in 00:08:04.469 1+0 records out 00:08:04.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220042 s, 18.6 MB/s 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:08:04.469 08:40:01 event.app_repeat 
-- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:08:04.469 08:40:01 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:08:04.469 08:40:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.469 08:40:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.469 08:40:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.469 08:40:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.469 08:40:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:05.037 { 00:08:05.037 "nbd_device": "/dev/nbd0", 00:08:05.037 "bdev_name": "Malloc0" 00:08:05.037 }, 00:08:05.037 { 00:08:05.037 "nbd_device": "/dev/nbd1", 00:08:05.037 "bdev_name": "Malloc1" 00:08:05.037 } 00:08:05.037 ]' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:05.037 { 00:08:05.037 "nbd_device": "/dev/nbd0", 00:08:05.037 "bdev_name": "Malloc0" 00:08:05.037 }, 00:08:05.037 { 00:08:05.037 "nbd_device": "/dev/nbd1", 00:08:05.037 "bdev_name": "Malloc1" 00:08:05.037 } 00:08:05.037 ]' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:05.037 /dev/nbd1' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:05.037 /dev/nbd1' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:05.037 
08:40:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:05.037 256+0 records in 00:08:05.037 256+0 records out 00:08:05.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996389 s, 105 MB/s 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:05.037 256+0 records in 00:08:05.037 256+0 records out 00:08:05.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277701 s, 37.8 MB/s 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:05.037 256+0 records in 00:08:05.037 256+0 records out 00:08:05.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291424 s, 36.0 MB/s 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.037 08:40:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.296 08:40:01 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.296 08:40:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.864 08:40:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.123 08:40:02 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:06.123 08:40:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:06.123 08:40:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:06.692 08:40:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:07.629 [2024-11-27 08:40:04.296437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:07.894 [2024-11-27 08:40:04.439765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.894 [2024-11-27 08:40:04.439769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.894 [2024-11-27 08:40:04.650250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:07.894 [2024-11-27 08:40:04.650404] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:09.797 spdk_app_start Round 2 00:08:09.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:09.797 08:40:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:09.797 08:40:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:09.797 08:40:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58335 /var/tmp/spdk-nbd.sock 00:08:09.797 08:40:06 event.app_repeat -- common/autotest_common.sh@832 -- # '[' -z 58335 ']' 00:08:09.797 08:40:06 event.app_repeat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:09.797 08:40:06 event.app_repeat -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:09.797 08:40:06 event.app_repeat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:09.797 08:40:06 event.app_repeat -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:09.797 08:40:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:09.797 08:40:06 event.app_repeat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:09.797 08:40:06 event.app_repeat -- common/autotest_common.sh@865 -- # return 0 00:08:09.798 08:40:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:10.056 Malloc0 00:08:10.056 08:40:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:10.315 Malloc1 00:08:10.574 08:40:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:10.574 08:40:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:10.833 /dev/nbd0 00:08:10.833 08:40:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:10.833 08:40:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 
00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:10.833 1+0 records in 00:08:10.833 1+0 records out 00:08:10.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422309 s, 9.7 MB/s 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:08:10.833 08:40:07 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:08:10.833 08:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.833 08:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:10.833 08:40:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:11.092 /dev/nbd1 00:08:11.092 08:40:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:11.092 08:40:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:08:11.092 08:40:07 event.app_repeat -- 
common/autotest_common.sh@874 -- # break 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:11.092 1+0 records in 00:08:11.092 1+0 records out 00:08:11.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378538 s, 10.8 MB/s 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:08:11.092 08:40:07 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:08:11.092 08:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.092 08:40:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.092 08:40:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.092 08:40:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.092 08:40:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.350 08:40:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:11.350 { 00:08:11.350 "nbd_device": "/dev/nbd0", 00:08:11.350 "bdev_name": "Malloc0" 00:08:11.350 }, 00:08:11.350 { 00:08:11.350 "nbd_device": "/dev/nbd1", 00:08:11.350 "bdev_name": "Malloc1" 00:08:11.350 } 00:08:11.350 ]' 00:08:11.350 08:40:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:11.350 { 
00:08:11.350 "nbd_device": "/dev/nbd0", 00:08:11.350 "bdev_name": "Malloc0" 00:08:11.351 }, 00:08:11.351 { 00:08:11.351 "nbd_device": "/dev/nbd1", 00:08:11.351 "bdev_name": "Malloc1" 00:08:11.351 } 00:08:11.351 ]' 00:08:11.351 08:40:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:11.609 /dev/nbd1' 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:11.609 /dev/nbd1' 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:11.609 256+0 records in 00:08:11.609 256+0 records out 00:08:11.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00740904 s, 142 MB/s 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.609 08:40:08 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:11.609 256+0 records in 00:08:11.609 256+0 records out 00:08:11.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293819 s, 35.7 MB/s 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:11.609 256+0 records in 00:08:11.609 256+0 records out 00:08:11.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0367211 s, 28.6 MB/s 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.609 08:40:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.868 08:40:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:12.126 08:40:08 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.126 08:40:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:12.692 08:40:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:12.692 08:40:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:12.951 08:40:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:14.326 
[2024-11-27 08:40:10.853717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:14.326 [2024-11-27 08:40:10.991902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.326 [2024-11-27 08:40:10.991914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.585 [2024-11-27 08:40:11.206372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:14.585 [2024-11-27 08:40:11.206539] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:15.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:15.962 08:40:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58335 /var/tmp/spdk-nbd.sock 00:08:15.962 08:40:12 event.app_repeat -- common/autotest_common.sh@832 -- # '[' -z 58335 ']' 00:08:15.962 08:40:12 event.app_repeat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:15.962 08:40:12 event.app_repeat -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:15.962 08:40:12 event.app_repeat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:15.962 08:40:12 event.app_repeat -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:15.962 08:40:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@865 -- # return 0 00:08:16.530 08:40:13 event.app_repeat -- event/event.sh@39 -- # killprocess 58335 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@951 -- # '[' -z 58335 ']' 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@955 -- # kill -0 58335 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@956 -- # uname 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 58335 00:08:16.530 killing process with pid 58335 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@969 -- # echo 'killing process with pid 58335' 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@970 -- # kill 58335 00:08:16.530 08:40:13 event.app_repeat -- common/autotest_common.sh@975 -- # wait 58335 00:08:17.475 spdk_app_start is called in Round 0. 00:08:17.475 Shutdown signal received, stop current app iteration 00:08:17.475 Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 reinitialization... 00:08:17.475 spdk_app_start is called in Round 1. 00:08:17.475 Shutdown signal received, stop current app iteration 00:08:17.475 Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 reinitialization... 00:08:17.475 spdk_app_start is called in Round 2. 
00:08:17.475 Shutdown signal received, stop current app iteration 00:08:17.475 Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 reinitialization... 00:08:17.475 spdk_app_start is called in Round 3. 00:08:17.475 Shutdown signal received, stop current app iteration 00:08:17.475 08:40:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:17.475 08:40:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:17.475 00:08:17.475 real 0m22.065s 00:08:17.475 user 0m48.724s 00:08:17.475 sys 0m3.219s 00:08:17.475 08:40:14 event.app_repeat -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:17.475 ************************************ 00:08:17.475 END TEST app_repeat 00:08:17.475 ************************************ 00:08:17.475 08:40:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:17.475 08:40:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:17.475 08:40:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:17.475 08:40:14 event -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:08:17.475 08:40:14 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:17.475 08:40:14 event -- common/autotest_common.sh@10 -- # set +x 00:08:17.475 ************************************ 00:08:17.475 START TEST cpu_locks 00:08:17.475 ************************************ 00:08:17.475 08:40:14 event.cpu_locks -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:17.734 * Looking for test storage... 
00:08:17.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:17.734 08:40:14 event.cpu_locks -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.734 08:40:14 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.734 08:40:14 event.cpu_locks -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.734 08:40:14 event.cpu_locks -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:17.734 08:40:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.735 08:40:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:17.735 08:40:14 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.735 08:40:14 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.735 --rc genhtml_branch_coverage=1 00:08:17.735 --rc genhtml_function_coverage=1 00:08:17.735 --rc genhtml_legend=1 00:08:17.735 --rc geninfo_all_blocks=1 00:08:17.735 --rc geninfo_unexecuted_blocks=1 00:08:17.735 00:08:17.735 ' 00:08:17.735 08:40:14 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.735 --rc genhtml_branch_coverage=1 00:08:17.735 --rc genhtml_function_coverage=1 00:08:17.735 --rc genhtml_legend=1 00:08:17.735 --rc geninfo_all_blocks=1 00:08:17.735 --rc geninfo_unexecuted_blocks=1 
00:08:17.735 00:08:17.735 ' 00:08:17.735 08:40:14 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.735 --rc genhtml_branch_coverage=1 00:08:17.735 --rc genhtml_function_coverage=1 00:08:17.735 --rc genhtml_legend=1 00:08:17.735 --rc geninfo_all_blocks=1 00:08:17.735 --rc geninfo_unexecuted_blocks=1 00:08:17.735 00:08:17.735 ' 00:08:17.735 08:40:14 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.735 --rc genhtml_branch_coverage=1 00:08:17.735 --rc genhtml_function_coverage=1 00:08:17.735 --rc genhtml_legend=1 00:08:17.735 --rc geninfo_all_blocks=1 00:08:17.735 --rc geninfo_unexecuted_blocks=1 00:08:17.735 00:08:17.735 ' 00:08:17.735 08:40:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:17.735 08:40:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:17.735 08:40:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:17.735 08:40:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:17.735 08:40:14 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:08:17.735 08:40:14 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:17.735 08:40:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:17.735 ************************************ 00:08:17.735 START TEST default_locks 00:08:17.735 ************************************ 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # default_locks 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58812 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:17.735 
08:40:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58812 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # '[' -z 58812 ']' 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:17.735 08:40:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:17.994 [2024-11-27 08:40:14.507235] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:08:17.994 [2024-11-27 08:40:14.507665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58812 ] 00:08:17.994 [2024-11-27 08:40:14.692014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.253 [2024-11-27 08:40:14.839264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.188 08:40:15 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:19.188 08:40:15 event.cpu_locks.default_locks -- common/autotest_common.sh@865 -- # return 0 00:08:19.188 08:40:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58812 00:08:19.188 08:40:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58812 00:08:19.188 08:40:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58812 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' -z 58812 ']' 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # kill -0 58812 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # uname 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 58812 00:08:19.755 killing process with pid 58812 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@969 -- # echo 'killing process with pid 58812' 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # kill 58812 00:08:19.755 08:40:16 event.cpu_locks.default_locks -- common/autotest_common.sh@975 -- # wait 58812 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58812 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58812 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:22.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.291 ERROR: process (pid: 58812) is no longer running 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58812 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # '[' -z 58812 ']' 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.291 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 847: kill: (58812) - No such process 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@865 -- # return 1 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:22.291 00:08:22.291 real 0m4.337s 00:08:22.291 user 0m4.262s 00:08:22.291 sys 0m0.830s 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:22.291 ************************************ 00:08:22.291 END TEST default_locks 00:08:22.291 ************************************ 00:08:22.291 08:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.291 08:40:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:22.291 08:40:18 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:08:22.291 08:40:18 event.cpu_locks -- 
common/autotest_common.sh@1108 -- # xtrace_disable 00:08:22.291 08:40:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.291 ************************************ 00:08:22.291 START TEST default_locks_via_rpc 00:08:22.291 ************************************ 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # default_locks_via_rpc 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58887 00:08:22.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58887 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 58887 ']' 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:22.291 08:40:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.291 [2024-11-27 08:40:18.893079] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:08:22.291 [2024-11-27 08:40:18.893551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58887 ] 00:08:22.559 [2024-11-27 08:40:19.086285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.559 [2024-11-27 08:40:19.260402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.497 08:40:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58887 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58887 00:08:23.497 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58887 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' -z 58887 ']' 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # kill -0 58887 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # uname 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 58887 00:08:24.066 killing process with pid 58887 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 58887' 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # kill 58887 00:08:24.066 08:40:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@975 -- # wait 58887 00:08:26.657 ************************************ 00:08:26.657 END TEST default_locks_via_rpc 00:08:26.657 ************************************ 00:08:26.657 00:08:26.657 real 0m4.362s 00:08:26.657 user 0m4.282s 00:08:26.657 sys 0m0.812s 00:08:26.657 
08:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:26.657 08:40:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.657 08:40:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:26.657 08:40:23 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:08:26.657 08:40:23 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:26.657 08:40:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:26.657 ************************************ 00:08:26.657 START TEST non_locking_app_on_locked_coremask 00:08:26.657 ************************************ 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # non_locking_app_on_locked_coremask 00:08:26.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58976 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58976 /var/tmp/spdk.sock 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # '[' -z 58976 ']' 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:26.657 08:40:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:26.657 [2024-11-27 08:40:23.312168] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:08:26.657 [2024-11-27 08:40:23.312377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58976 ] 00:08:26.916 [2024-11-27 08:40:23.496502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.916 [2024-11-27 08:40:23.646074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@865 -- # return 0 00:08:28.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58992 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58992 /var/tmp/spdk2.sock 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # '[' -z 58992 ']' 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:28.294 08:40:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.294 [2024-11-27 08:40:24.740813] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:08:28.294 [2024-11-27 08:40:24.741362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58992 ] 00:08:28.294 [2024-11-27 08:40:24.963660] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:28.294 [2024-11-27 08:40:24.963787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.553 [2024-11-27 08:40:25.260530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.087 08:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:31.088 08:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@865 -- # return 0 00:08:31.088 08:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58976 00:08:31.088 08:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58976 00:08:31.088 08:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58976 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' -z 58976 ']' 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # kill -0 58976 00:08:31.655 08:40:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # uname 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 58976 00:08:31.655 killing process with pid 58976 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 58976' 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # kill 58976 00:08:31.655 08:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # wait 58976 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58992 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' -z 58992 ']' 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # kill -0 58992 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # uname 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 58992 00:08:36.938 killing process with pid 58992 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # 
process_name=reactor_0 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 58992' 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # kill 58992 00:08:36.938 08:40:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # wait 58992 00:08:38.839 ************************************ 00:08:38.839 END TEST non_locking_app_on_locked_coremask 00:08:38.839 ************************************ 00:08:38.839 00:08:38.839 real 0m12.166s 00:08:38.839 user 0m12.543s 00:08:38.839 sys 0m1.753s 00:08:38.839 08:40:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:38.839 08:40:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:38.839 08:40:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:38.839 08:40:35 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:08:38.839 08:40:35 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:38.839 08:40:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:38.839 ************************************ 00:08:38.839 START TEST locking_app_on_unlocked_coremask 00:08:38.839 ************************************ 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # locking_app_on_unlocked_coremask 00:08:38.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59147 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59147 /var/tmp/spdk.sock 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # '[' -z 59147 ']' 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:38.839 08:40:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:38.839 [2024-11-27 08:40:35.536490] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:08:38.839 [2024-11-27 08:40:35.537657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59147 ] 00:08:39.097 [2024-11-27 08:40:35.734205] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:39.097 [2024-11-27 08:40:35.734763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.355 [2024-11-27 08:40:35.891696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@865 -- # return 0 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59168 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59168 /var/tmp/spdk2.sock 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # '[' -z 59168 ']' 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:40.288 08:40:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.288 [2024-11-27 08:40:36.983969] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:08:40.288 [2024-11-27 08:40:36.984162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:08:40.545 [2024-11-27 08:40:37.182019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.802 [2024-11-27 08:40:37.513455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.334 08:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:43.334 08:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@865 -- # return 0 00:08:43.334 08:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59168 00:08:43.334 08:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59168 00:08:43.334 08:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59147 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' -z 59147 ']' 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # kill -0 59147 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # uname 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 59147 00:08:43.903 killing process with pid 59147 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 59147' 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # kill 59147 00:08:43.903 08:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@975 -- # wait 59147 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59168 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' -z 59168 ']' 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # kill -0 59168 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # uname 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 59168 00:08:49.189 killing process with pid 59168 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 59168' 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # kill 59168 00:08:49.189 08:40:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@975 -- # wait 59168 00:08:51.088 ************************************ 00:08:51.088 END TEST locking_app_on_unlocked_coremask 00:08:51.088 ************************************ 00:08:51.088 00:08:51.088 real 0m12.324s 00:08:51.088 user 0m12.639s 00:08:51.088 sys 0m1.831s 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.088 08:40:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:51.088 08:40:47 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:08:51.088 08:40:47 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:51.088 08:40:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:51.088 ************************************ 00:08:51.088 START TEST locking_app_on_locked_coremask 00:08:51.088 ************************************ 00:08:51.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # locking_app_on_locked_coremask 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59326 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59326 /var/tmp/spdk.sock 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # '[' -z 59326 ']' 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.088 08:40:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:51.346 [2024-11-27 08:40:47.932798] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:08:51.346 [2024-11-27 08:40:47.933015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:08:51.605 [2024-11-27 08:40:48.125966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.605 [2024-11-27 08:40:48.280849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.542 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@865 -- # return 0 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59342 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59342 /var/tmp/spdk2.sock 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59342 /var/tmp/spdk2.sock 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59342 /var/tmp/spdk2.sock 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # '[' -z 59342 ']' 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:52.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:52.543 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.801 [2024-11-27 08:40:49.345203] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:08:52.801 [2024-11-27 08:40:49.345825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59342 ] 00:08:52.801 [2024-11-27 08:40:49.543778] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59326 has claimed it. 00:08:52.801 [2024-11-27 08:40:49.543887] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:53.368 ERROR: process (pid: 59342) is no longer running 00:08:53.368 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 847: kill: (59342) - No such process 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@865 -- # return 1 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59326 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59326 00:08:53.368 08:40:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59326 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' -z 59326 ']' 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # kill -0 59326 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # uname 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 59326 00:08:53.936 
killing process with pid 59326 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 59326' 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # kill 59326 00:08:53.936 08:40:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # wait 59326 00:08:56.470 ************************************ 00:08:56.470 END TEST locking_app_on_locked_coremask 00:08:56.470 ************************************ 00:08:56.470 00:08:56.470 real 0m5.146s 00:08:56.470 user 0m5.356s 00:08:56.470 sys 0m1.044s 00:08:56.470 08:40:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:56.470 08:40:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:56.470 08:40:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:56.470 08:40:52 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:08:56.470 08:40:52 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:56.470 08:40:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:56.470 ************************************ 00:08:56.470 START TEST locking_overlapped_coremask 00:08:56.470 ************************************ 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # locking_overlapped_coremask 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59417 00:08:56.470 08:40:52 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59417 /var/tmp/spdk.sock 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # '[' -z 59417 ']' 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:56.470 08:40:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:56.470 [2024-11-27 08:40:53.118591] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:08:56.470 [2024-11-27 08:40:53.118809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59417 ] 00:08:56.730 [2024-11-27 08:40:53.306787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.730 [2024-11-27 08:40:53.472615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.730 [2024-11-27 08:40:53.473066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.730 [2024-11-27 08:40:53.473418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@865 -- # return 0 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59435 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59435 /var/tmp/spdk2.sock 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59435 /var/tmp/spdk2.sock 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:58.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59435 /var/tmp/spdk2.sock 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # '[' -z 59435 ']' 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:58.108 08:40:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:58.108 [2024-11-27 08:40:54.544536] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:08:58.108 [2024-11-27 08:40:54.544904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:08:58.108 [2024-11-27 08:40:54.745139] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59417 has claimed it. 00:08:58.108 [2024-11-27 08:40:54.745245] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:58.675 ERROR: process (pid: 59435) is no longer running 00:08:58.675 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 847: kill: (59435) - No such process 00:08:58.675 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:58.675 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@865 -- # return 1 00:08:58.675 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:58.675 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.675 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59417 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' -z 59417 ']' 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # kill -0 59417 00:08:58.676 08:40:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # uname 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 59417 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 59417' 00:08:58.676 killing process with pid 59417 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # kill 59417 00:08:58.676 08:40:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@975 -- # wait 59417 00:09:01.208 00:09:01.208 real 0m4.708s 00:09:01.208 user 0m12.548s 00:09:01.208 sys 0m0.886s 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.209 ************************************ 00:09:01.209 END TEST locking_overlapped_coremask 00:09:01.209 ************************************ 00:09:01.209 08:40:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:01.209 08:40:57 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:09:01.209 08:40:57 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:01.209 08:40:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.209 ************************************ 00:09:01.209 START TEST 
locking_overlapped_coremask_via_rpc 00:09:01.209 ************************************ 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # locking_overlapped_coremask_via_rpc 00:09:01.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59504 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59504 /var/tmp/spdk.sock 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 59504 ']' 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:01.209 08:40:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.209 [2024-11-27 08:40:57.867611] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:09:01.209 [2024-11-27 08:40:57.868016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59504 ] 00:09:01.467 [2024-11-27 08:40:58.045448] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:01.467 [2024-11-27 08:40:58.045830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:01.467 [2024-11-27 08:40:58.205749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.467 [2024-11-27 08:40:58.205876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.467 [2024-11-27 08:40:58.205887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.404 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:02.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:02.404 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:09:02.404 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:02.404 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59528 00:09:02.404 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59528 /var/tmp/spdk2.sock 00:09:02.404 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 59528 ']' 00:09:02.404 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:02.404 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:02.405 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:02.405 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:02.405 08:40:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.663 [2024-11-27 08:40:59.281915] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:09:02.663 [2024-11-27 08:40:59.282491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59528 ] 00:09:02.922 [2024-11-27 08:40:59.489472] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:02.922 [2024-11-27 08:40:59.489569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:03.182 [2024-11-27 08:40:59.778926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.182 [2024-11-27 08:40:59.778989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.182 [2024-11-27 08:40:59.778970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.714 08:41:02 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.714 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 [2024-11-27 08:41:02.169723] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59504 has claimed it. 00:09:05.715 request: 00:09:05.715 { 00:09:05.715 "method": "framework_enable_cpumask_locks", 00:09:05.715 "req_id": 1 00:09:05.715 } 00:09:05.715 Got JSON-RPC error response 00:09:05.715 response: 00:09:05.715 { 00:09:05.715 "code": -32603, 00:09:05.715 "message": "Failed to claim CPU core: 2" 00:09:05.715 } 00:09:05.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59504 /var/tmp/spdk.sock 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 59504 ']' 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59528 /var/tmp/spdk2.sock 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 59528 ']' 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:05.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:05.715 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.283 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:06.283 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:09:06.283 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:06.283 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:06.283 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:06.283 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:06.283 ************************************ 00:09:06.283 END TEST locking_overlapped_coremask_via_rpc 00:09:06.283 ************************************ 00:09:06.283 00:09:06.283 real 0m5.053s 00:09:06.283 user 0m1.930s 00:09:06.283 sys 0m0.273s 00:09:06.283 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:06.283 08:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.283 08:41:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:06.283 08:41:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59504 ]] 00:09:06.283 08:41:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59504 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' -z 59504 ']' 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@955 -- # kill -0 59504 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@956 -- # uname 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 59504 00:09:06.283 killing process with pid 59504 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'killing process with pid 59504' 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@970 -- # kill 59504 00:09:06.283 08:41:02 event.cpu_locks -- common/autotest_common.sh@975 -- # wait 59504 00:09:08.815 08:41:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59528 ]] 00:09:08.815 08:41:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59528 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' -z 59528 ']' 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@955 -- # kill -0 59528 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@956 -- # uname 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 59528 00:09:08.815 killing process with pid 59528 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'killing 
process with pid 59528' 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@970 -- # kill 59528 00:09:08.815 08:41:05 event.cpu_locks -- common/autotest_common.sh@975 -- # wait 59528 00:09:11.348 08:41:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:11.348 08:41:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:11.348 08:41:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59504 ]] 00:09:11.348 08:41:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59504 00:09:11.348 08:41:07 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' -z 59504 ']' 00:09:11.348 08:41:07 event.cpu_locks -- common/autotest_common.sh@955 -- # kill -0 59504 00:09:11.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 955: kill: (59504) - No such process 00:09:11.348 08:41:07 event.cpu_locks -- common/autotest_common.sh@978 -- # echo 'Process with pid 59504 is not found' 00:09:11.348 Process with pid 59504 is not found 00:09:11.348 08:41:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59528 ]] 00:09:11.348 08:41:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59528 00:09:11.348 08:41:07 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' -z 59528 ']' 00:09:11.348 Process with pid 59528 is not found 00:09:11.348 08:41:07 event.cpu_locks -- common/autotest_common.sh@955 -- # kill -0 59528 00:09:11.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 955: kill: (59528) - No such process 00:09:11.348 08:41:07 event.cpu_locks -- common/autotest_common.sh@978 -- # echo 'Process with pid 59528 is not found' 00:09:11.348 08:41:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:11.348 ************************************ 00:09:11.348 END TEST cpu_locks 00:09:11.348 ************************************ 00:09:11.348 00:09:11.348 real 0m53.587s 00:09:11.348 user 1m31.919s 00:09:11.348 sys 0m8.901s 00:09:11.348 08:41:07 event.cpu_locks -- common/autotest_common.sh@1127 -- # 
xtrace_disable 00:09:11.348 08:41:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.348 ************************************ 00:09:11.348 END TEST event 00:09:11.348 ************************************ 00:09:11.348 00:09:11.348 real 1m26.989s 00:09:11.348 user 2m38.111s 00:09:11.348 sys 0m13.373s 00:09:11.348 08:41:07 event -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:11.348 08:41:07 event -- common/autotest_common.sh@10 -- # set +x 00:09:11.348 08:41:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:11.348 08:41:07 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:09:11.348 08:41:07 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:11.348 08:41:07 -- common/autotest_common.sh@10 -- # set +x 00:09:11.348 ************************************ 00:09:11.348 START TEST thread 00:09:11.348 ************************************ 00:09:11.348 08:41:07 thread -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:11.348 * Looking for test storage... 
00:09:11.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:11.348 08:41:07 thread -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:11.348 08:41:07 thread -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:11.348 08:41:07 thread -- common/autotest_common.sh@1690 -- # lcov --version 00:09:11.348 08:41:08 thread -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:11.348 08:41:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.349 08:41:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.349 08:41:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.349 08:41:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.349 08:41:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.349 08:41:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.349 08:41:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.349 08:41:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.349 08:41:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.349 08:41:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.349 08:41:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.349 08:41:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:11.349 08:41:08 thread -- scripts/common.sh@345 -- # : 1 00:09:11.349 08:41:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.349 08:41:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.349 08:41:08 thread -- scripts/common.sh@365 -- # decimal 1 00:09:11.349 08:41:08 thread -- scripts/common.sh@353 -- # local d=1 00:09:11.349 08:41:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.349 08:41:08 thread -- scripts/common.sh@355 -- # echo 1 00:09:11.349 08:41:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.349 08:41:08 thread -- scripts/common.sh@366 -- # decimal 2 00:09:11.349 08:41:08 thread -- scripts/common.sh@353 -- # local d=2 00:09:11.349 08:41:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.349 08:41:08 thread -- scripts/common.sh@355 -- # echo 2 00:09:11.349 08:41:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.349 08:41:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.349 08:41:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.349 08:41:08 thread -- scripts/common.sh@368 -- # return 0 00:09:11.349 08:41:08 thread -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.349 08:41:08 thread -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:11.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.349 --rc genhtml_branch_coverage=1 00:09:11.349 --rc genhtml_function_coverage=1 00:09:11.349 --rc genhtml_legend=1 00:09:11.349 --rc geninfo_all_blocks=1 00:09:11.349 --rc geninfo_unexecuted_blocks=1 00:09:11.349 00:09:11.349 ' 00:09:11.349 08:41:08 thread -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:11.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.349 --rc genhtml_branch_coverage=1 00:09:11.349 --rc genhtml_function_coverage=1 00:09:11.349 --rc genhtml_legend=1 00:09:11.349 --rc geninfo_all_blocks=1 00:09:11.349 --rc geninfo_unexecuted_blocks=1 00:09:11.349 00:09:11.349 ' 00:09:11.349 08:41:08 thread -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:11.349 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.349 --rc genhtml_branch_coverage=1 00:09:11.349 --rc genhtml_function_coverage=1 00:09:11.349 --rc genhtml_legend=1 00:09:11.349 --rc geninfo_all_blocks=1 00:09:11.349 --rc geninfo_unexecuted_blocks=1 00:09:11.349 00:09:11.349 ' 00:09:11.349 08:41:08 thread -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:11.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.349 --rc genhtml_branch_coverage=1 00:09:11.349 --rc genhtml_function_coverage=1 00:09:11.349 --rc genhtml_legend=1 00:09:11.349 --rc geninfo_all_blocks=1 00:09:11.349 --rc geninfo_unexecuted_blocks=1 00:09:11.349 00:09:11.349 ' 00:09:11.349 08:41:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:11.349 08:41:08 thread -- common/autotest_common.sh@1102 -- # '[' 8 -le 1 ']' 00:09:11.349 08:41:08 thread -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:11.349 08:41:08 thread -- common/autotest_common.sh@10 -- # set +x 00:09:11.349 ************************************ 00:09:11.349 START TEST thread_poller_perf 00:09:11.349 ************************************ 00:09:11.349 08:41:08 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:11.349 [2024-11-27 08:41:08.100471] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:09:11.349 [2024-11-27 08:41:08.101121] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59723 ] 00:09:11.608 [2024-11-27 08:41:08.294142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.867 [2024-11-27 08:41:08.487546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.867 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:13.242 [2024-11-27T08:41:10.002Z] ====================================== 00:09:13.242 [2024-11-27T08:41:10.002Z] busy:2216085362 (cyc) 00:09:13.242 [2024-11-27T08:41:10.002Z] total_run_count: 295000 00:09:13.242 [2024-11-27T08:41:10.002Z] tsc_hz: 2200000000 (cyc) 00:09:13.242 [2024-11-27T08:41:10.002Z] ====================================== 00:09:13.242 [2024-11-27T08:41:10.002Z] poller_cost: 7512 (cyc), 3414 (nsec) 00:09:13.242 00:09:13.242 real 0m1.737s 00:09:13.242 user 0m1.507s 00:09:13.242 sys 0m0.116s 00:09:13.242 ************************************ 00:09:13.242 END TEST thread_poller_perf 00:09:13.242 ************************************ 00:09:13.242 08:41:09 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:13.242 08:41:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:13.242 08:41:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:13.242 08:41:09 thread -- common/autotest_common.sh@1102 -- # '[' 8 -le 1 ']' 00:09:13.242 08:41:09 thread -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:13.242 08:41:09 thread -- common/autotest_common.sh@10 -- # set +x 00:09:13.242 ************************************ 00:09:13.242 START TEST thread_poller_perf 00:09:13.242 
************************************ 00:09:13.242 08:41:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:13.242 [2024-11-27 08:41:09.888535] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:09:13.243 [2024-11-27 08:41:09.888720] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:09:13.500 [2024-11-27 08:41:10.068536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.500 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:13.500 [2024-11-27 08:41:10.224372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.879 [2024-11-27T08:41:11.639Z] ====================================== 00:09:14.879 [2024-11-27T08:41:11.639Z] busy:2204284660 (cyc) 00:09:14.879 [2024-11-27T08:41:11.639Z] total_run_count: 3437000 00:09:14.879 [2024-11-27T08:41:11.639Z] tsc_hz: 2200000000 (cyc) 00:09:14.879 [2024-11-27T08:41:11.639Z] ====================================== 00:09:14.879 [2024-11-27T08:41:11.639Z] poller_cost: 641 (cyc), 291 (nsec) 00:09:14.879 00:09:14.879 real 0m1.624s 00:09:14.879 user 0m1.396s 00:09:14.879 sys 0m0.117s 00:09:14.879 08:41:11 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:14.879 08:41:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:14.879 ************************************ 00:09:14.879 END TEST thread_poller_perf 00:09:14.879 ************************************ 00:09:14.879 08:41:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:14.879 ************************************ 00:09:14.879 END TEST thread 00:09:14.879 ************************************ 00:09:14.879 
00:09:14.879 real 0m3.662s 00:09:14.879 user 0m3.059s 00:09:14.879 sys 0m0.372s 00:09:14.879 08:41:11 thread -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:14.879 08:41:11 thread -- common/autotest_common.sh@10 -- # set +x 00:09:14.879 08:41:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:14.879 08:41:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:14.879 08:41:11 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:09:14.879 08:41:11 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:14.879 08:41:11 -- common/autotest_common.sh@10 -- # set +x 00:09:14.879 ************************************ 00:09:14.879 START TEST app_cmdline 00:09:14.879 ************************************ 00:09:14.879 08:41:11 app_cmdline -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:14.879 * Looking for test storage... 00:09:15.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:15.138 08:41:11 app_cmdline -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:15.138 08:41:11 app_cmdline -- common/autotest_common.sh@1690 -- # lcov --version 00:09:15.138 08:41:11 app_cmdline -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:15.138 08:41:11 app_cmdline -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.138 08:41:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:15.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.139 --rc genhtml_branch_coverage=1 00:09:15.139 --rc genhtml_function_coverage=1 00:09:15.139 --rc 
genhtml_legend=1 00:09:15.139 --rc geninfo_all_blocks=1 00:09:15.139 --rc geninfo_unexecuted_blocks=1 00:09:15.139 00:09:15.139 ' 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:15.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.139 --rc genhtml_branch_coverage=1 00:09:15.139 --rc genhtml_function_coverage=1 00:09:15.139 --rc genhtml_legend=1 00:09:15.139 --rc geninfo_all_blocks=1 00:09:15.139 --rc geninfo_unexecuted_blocks=1 00:09:15.139 00:09:15.139 ' 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:15.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.139 --rc genhtml_branch_coverage=1 00:09:15.139 --rc genhtml_function_coverage=1 00:09:15.139 --rc genhtml_legend=1 00:09:15.139 --rc geninfo_all_blocks=1 00:09:15.139 --rc geninfo_unexecuted_blocks=1 00:09:15.139 00:09:15.139 ' 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:15.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.139 --rc genhtml_branch_coverage=1 00:09:15.139 --rc genhtml_function_coverage=1 00:09:15.139 --rc genhtml_legend=1 00:09:15.139 --rc geninfo_all_blocks=1 00:09:15.139 --rc geninfo_unexecuted_blocks=1 00:09:15.139 00:09:15.139 ' 00:09:15.139 08:41:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:15.139 08:41:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59854 00:09:15.139 08:41:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:15.139 08:41:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59854 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@832 -- # '[' -z 59854 ']' 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@837 -- # 
local max_retries=100 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:15.139 08:41:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:15.401 [2024-11-27 08:41:11.914364] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:09:15.401 [2024-11-27 08:41:11.914816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:09:15.401 [2024-11-27 08:41:12.094862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.659 [2024-11-27 08:41:12.257575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.672 08:41:13 app_cmdline -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:16.672 08:41:13 app_cmdline -- common/autotest_common.sh@865 -- # return 0 00:09:16.672 08:41:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:16.930 { 00:09:16.930 "version": "SPDK v25.01-pre git sha1 df5e5465c", 00:09:16.930 "fields": { 00:09:16.930 "major": 25, 00:09:16.930 "minor": 1, 00:09:16.930 "patch": 0, 00:09:16.930 "suffix": "-pre", 00:09:16.930 "commit": "df5e5465c" 00:09:16.930 } 00:09:16.930 } 00:09:16.930 08:41:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:16.930 08:41:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:16.930 08:41:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:16.930 08:41:13 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:16.930 08:41:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:16.931 08:41:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:16.931 08:41:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.931 08:41:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:16.931 08:41:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:16.931 08:41:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:16.931 08:41:13 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:17.498 request: 00:09:17.498 { 00:09:17.498 "method": "env_dpdk_get_mem_stats", 00:09:17.498 "req_id": 1 00:09:17.498 } 00:09:17.498 Got JSON-RPC error response 00:09:17.498 response: 00:09:17.498 { 00:09:17.498 "code": -32601, 00:09:17.498 "message": "Method not found" 00:09:17.498 } 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:17.498 08:41:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59854 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@951 -- # '[' -z 59854 ']' 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@955 -- # kill -0 59854 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@956 -- # uname 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:17.498 08:41:13 app_cmdline -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 59854 00:09:17.498 killing process with pid 59854 00:09:17.498 08:41:14 app_cmdline -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:17.498 08:41:14 app_cmdline -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:17.498 08:41:14 app_cmdline -- common/autotest_common.sh@969 -- # echo 'killing process with pid 59854' 00:09:17.498 08:41:14 app_cmdline -- common/autotest_common.sh@970 -- # kill 59854 00:09:17.498 08:41:14 app_cmdline -- common/autotest_common.sh@975 -- # wait 59854 00:09:20.026 00:09:20.026 real 0m4.944s 00:09:20.026 user 0m5.332s 00:09:20.026 sys 0m0.807s 00:09:20.026 08:41:16 app_cmdline -- 
common/autotest_common.sh@1127 -- # xtrace_disable 00:09:20.026 ************************************ 00:09:20.026 END TEST app_cmdline 00:09:20.026 ************************************ 00:09:20.026 08:41:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:20.026 08:41:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:20.026 08:41:16 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:09:20.026 08:41:16 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:20.026 08:41:16 -- common/autotest_common.sh@10 -- # set +x 00:09:20.026 ************************************ 00:09:20.026 START TEST version 00:09:20.026 ************************************ 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:20.026 * Looking for test storage... 00:09:20.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1690 -- # lcov --version 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:20.026 08:41:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.026 08:41:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.026 08:41:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.026 08:41:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.026 08:41:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.026 08:41:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.026 08:41:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.026 08:41:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.026 08:41:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.026 08:41:16 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:20.026 08:41:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.026 08:41:16 version -- scripts/common.sh@344 -- # case "$op" in 00:09:20.026 08:41:16 version -- scripts/common.sh@345 -- # : 1 00:09:20.026 08:41:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.026 08:41:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.026 08:41:16 version -- scripts/common.sh@365 -- # decimal 1 00:09:20.026 08:41:16 version -- scripts/common.sh@353 -- # local d=1 00:09:20.026 08:41:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.026 08:41:16 version -- scripts/common.sh@355 -- # echo 1 00:09:20.026 08:41:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.026 08:41:16 version -- scripts/common.sh@366 -- # decimal 2 00:09:20.026 08:41:16 version -- scripts/common.sh@353 -- # local d=2 00:09:20.026 08:41:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.026 08:41:16 version -- scripts/common.sh@355 -- # echo 2 00:09:20.026 08:41:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.026 08:41:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.026 08:41:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.026 08:41:16 version -- scripts/common.sh@368 -- # return 0 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.026 --rc genhtml_branch_coverage=1 00:09:20.026 --rc genhtml_function_coverage=1 00:09:20.026 --rc genhtml_legend=1 00:09:20.026 --rc geninfo_all_blocks=1 00:09:20.026 --rc geninfo_unexecuted_blocks=1 00:09:20.026 00:09:20.026 ' 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1703 -- # 
LCOV_OPTS=' 00:09:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.026 --rc genhtml_branch_coverage=1 00:09:20.026 --rc genhtml_function_coverage=1 00:09:20.026 --rc genhtml_legend=1 00:09:20.026 --rc geninfo_all_blocks=1 00:09:20.026 --rc geninfo_unexecuted_blocks=1 00:09:20.026 00:09:20.026 ' 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.026 --rc genhtml_branch_coverage=1 00:09:20.026 --rc genhtml_function_coverage=1 00:09:20.026 --rc genhtml_legend=1 00:09:20.026 --rc geninfo_all_blocks=1 00:09:20.026 --rc geninfo_unexecuted_blocks=1 00:09:20.026 00:09:20.026 ' 00:09:20.026 08:41:16 version -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.026 --rc genhtml_branch_coverage=1 00:09:20.026 --rc genhtml_function_coverage=1 00:09:20.026 --rc genhtml_legend=1 00:09:20.026 --rc geninfo_all_blocks=1 00:09:20.026 --rc geninfo_unexecuted_blocks=1 00:09:20.026 00:09:20.026 ' 00:09:20.026 08:41:16 version -- app/version.sh@17 -- # get_header_version major 00:09:20.026 08:41:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:20.026 08:41:16 version -- app/version.sh@14 -- # cut -f2 00:09:20.026 08:41:16 version -- app/version.sh@14 -- # tr -d '"' 00:09:20.026 08:41:16 version -- app/version.sh@17 -- # major=25 00:09:20.026 08:41:16 version -- app/version.sh@18 -- # get_header_version minor 00:09:20.026 08:41:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:20.026 08:41:16 version -- app/version.sh@14 -- # cut -f2 00:09:20.026 08:41:16 version -- app/version.sh@14 -- # tr -d '"' 00:09:20.026 08:41:16 version -- app/version.sh@18 -- # minor=1 00:09:20.026 08:41:16 
version -- app/version.sh@19 -- # get_header_version patch 00:09:20.026 08:41:16 version -- app/version.sh@14 -- # cut -f2 00:09:20.026 08:41:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:20.026 08:41:16 version -- app/version.sh@14 -- # tr -d '"' 00:09:20.026 08:41:16 version -- app/version.sh@19 -- # patch=0 00:09:20.026 08:41:16 version -- app/version.sh@20 -- # get_header_version suffix 00:09:20.026 08:41:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:20.026 08:41:16 version -- app/version.sh@14 -- # tr -d '"' 00:09:20.026 08:41:16 version -- app/version.sh@14 -- # cut -f2 00:09:20.026 08:41:16 version -- app/version.sh@20 -- # suffix=-pre 00:09:20.027 08:41:16 version -- app/version.sh@22 -- # version=25.1 00:09:20.027 08:41:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:20.027 08:41:16 version -- app/version.sh@28 -- # version=25.1rc0 00:09:20.027 08:41:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:20.284 08:41:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:20.284 08:41:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:20.284 08:41:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:20.284 00:09:20.284 real 0m0.274s 00:09:20.284 user 0m0.175s 00:09:20.284 sys 0m0.127s 00:09:20.284 ************************************ 00:09:20.284 END TEST version 00:09:20.284 ************************************ 00:09:20.284 08:41:16 version -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:20.284 08:41:16 version -- common/autotest_common.sh@10 -- # set +x 00:09:20.284 
08:41:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:20.284 08:41:16 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:09:20.284 08:41:16 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:20.284 08:41:16 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:09:20.284 08:41:16 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:20.284 08:41:16 -- common/autotest_common.sh@10 -- # set +x 00:09:20.284 ************************************ 00:09:20.284 START TEST bdev_raid 00:09:20.284 ************************************ 00:09:20.284 08:41:16 bdev_raid -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:20.284 * Looking for test storage... 00:09:20.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:20.284 08:41:16 bdev_raid -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:20.284 08:41:16 bdev_raid -- common/autotest_common.sh@1690 -- # lcov --version 00:09:20.284 08:41:16 bdev_raid -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:20.284 08:41:17 bdev_raid -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@345 -- # : 1 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:09:20.284 08:41:17 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.541 08:41:17 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:09:20.541 08:41:17 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:09:20.542 08:41:17 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.542 08:41:17 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:09:20.542 08:41:17 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.542 08:41:17 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.542 08:41:17 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.542 08:41:17 bdev_raid -- scripts/common.sh@368 -- # return 0 00:09:20.542 08:41:17 bdev_raid -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.542 08:41:17 bdev_raid -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:20.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.542 --rc genhtml_branch_coverage=1 00:09:20.542 --rc genhtml_function_coverage=1 00:09:20.542 --rc genhtml_legend=1 00:09:20.542 --rc geninfo_all_blocks=1 00:09:20.542 --rc geninfo_unexecuted_blocks=1 00:09:20.542 00:09:20.542 ' 00:09:20.542 08:41:17 bdev_raid -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:20.542 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:20.542 --rc genhtml_branch_coverage=1 00:09:20.542 --rc genhtml_function_coverage=1 00:09:20.542 --rc genhtml_legend=1 00:09:20.542 --rc geninfo_all_blocks=1 00:09:20.542 --rc geninfo_unexecuted_blocks=1 00:09:20.542 00:09:20.542 ' 00:09:20.542 08:41:17 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:20.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.542 --rc genhtml_branch_coverage=1 00:09:20.542 --rc genhtml_function_coverage=1 00:09:20.542 --rc genhtml_legend=1 00:09:20.542 --rc geninfo_all_blocks=1 00:09:20.542 --rc geninfo_unexecuted_blocks=1 00:09:20.542 00:09:20.542 ' 00:09:20.542 08:41:17 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:20.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.542 --rc genhtml_branch_coverage=1 00:09:20.542 --rc genhtml_function_coverage=1 00:09:20.542 --rc genhtml_legend=1 00:09:20.542 --rc geninfo_all_blocks=1 00:09:20.542 --rc geninfo_unexecuted_blocks=1 00:09:20.542 00:09:20.542 ' 00:09:20.542 08:41:17 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:20.542 08:41:17 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:20.542 08:41:17 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:09:20.542 08:41:17 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:09:20.542 08:41:17 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:09:20.542 08:41:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:09:20.542 08:41:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:09:20.542 08:41:17 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:09:20.542 08:41:17 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:20.542 08:41:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 ************************************ 
00:09:20.542 START TEST raid1_resize_data_offset_test 00:09:20.542 ************************************ 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # raid_resize_data_offset_test 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60047 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60047' 00:09:20.542 Process raid pid: 60047 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60047 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@832 -- # '[' -z 60047 ']' 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:20.542 08:41:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 [2024-11-27 08:41:17.172413] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:09:20.542 [2024-11-27 08:41:17.172583] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.800 [2024-11-27 08:41:17.350481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.800 [2024-11-27 08:41:17.532645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.057 [2024-11-27 08:41:17.764992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.057 [2024-11-27 08:41:17.765047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@865 -- # return 0 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 malloc0 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 malloc1 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.625 08:41:18 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 null0 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 [2024-11-27 08:41:18.335511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:09:21.625 [2024-11-27 08:41:18.338172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:21.625 [2024-11-27 08:41:18.338246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:09:21.625 [2024-11-27 08:41:18.338486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:21.625 [2024-11-27 08:41:18.338520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:09:21.625 [2024-11-27 08:41:18.338876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:21.625 [2024-11-27 08:41:18.339265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:21.625 [2024-11-27 08:41:18.339296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:21.625 [2024-11-27 08:41:18.339551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.884 08:41:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:09:21.884 08:41:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:09:21.884 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.884 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.884 [2024-11-27 08:41:18.399672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:09:21.884 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.884 08:41:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:09:21.884 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.884 08:41:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.452 malloc2 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.452 [2024-11-27 08:41:19.015883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:22.452 [2024-11-27 08:41:19.034811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.452 [2024-11-27 08:41:19.037937] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60047 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@951 -- # '[' -z 60047 ']' 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # kill -0 60047 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # uname 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux 
']' 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60047 00:09:22.452 killing process with pid 60047 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 60047' 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # kill 60047 00:09:22.452 08:41:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@975 -- # wait 60047 00:09:22.452 [2024-11-27 08:41:19.129922] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.452 [2024-11-27 08:41:19.131804] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:09:22.452 [2024-11-27 08:41:19.131888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.452 [2024-11-27 08:41:19.131916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:09:22.452 [2024-11-27 08:41:19.166129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.452 [2024-11-27 08:41:19.166664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.452 [2024-11-27 08:41:19.166699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:24.357 [2024-11-27 08:41:20.947956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.735 ************************************ 00:09:25.735 END TEST raid1_resize_data_offset_test 00:09:25.735 ************************************ 00:09:25.735 08:41:22 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:09:25.735 00:09:25.735 real 0m5.050s 00:09:25.735 user 0m4.861s 00:09:25.735 sys 0m0.755s 00:09:25.735 08:41:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:25.735 08:41:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.735 08:41:22 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:09:25.735 08:41:22 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:25.735 08:41:22 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:25.735 08:41:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.735 ************************************ 00:09:25.735 START TEST raid0_resize_superblock_test 00:09:25.735 ************************************ 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # raid_resize_superblock_test 0 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:09:25.735 Process raid pid: 60136 00:09:25.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60136 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60136' 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60136 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 60136 ']' 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:25.735 08:41:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.735 [2024-11-27 08:41:22.281783] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:09:25.735 [2024-11-27 08:41:22.282438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.735 [2024-11-27 08:41:22.471486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.993 [2024-11-27 08:41:22.616504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.251 [2024-11-27 08:41:22.831899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.251 [2024-11-27 08:41:22.832302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.509 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:26.509 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:09:26.509 08:41:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:26.509 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.509 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 malloc0 00:09:27.447 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:27.447 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.447 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 [2024-11-27 08:41:23.848177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:27.447 [2024-11-27 08:41:23.848277] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.447 [2024-11-27 08:41:23.848318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:27.447 [2024-11-27 08:41:23.848394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.447 [2024-11-27 08:41:23.851580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.447 [2024-11-27 08:41:23.851631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:27.447 pt0 00:09:27.447 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:27.447 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.447 08:41:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 8ae0da27-f32f-421b-a9e7-546e2536a838 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 a363acce-33aa-4e7a-b007-a5c61c15162c 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.447 08:41:24 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 8f305ca9-4881-4e83-a895-f710bb05cfe0 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 [2024-11-27 08:41:24.044105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a363acce-33aa-4e7a-b007-a5c61c15162c is claimed 00:09:27.447 [2024-11-27 08:41:24.044219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8f305ca9-4881-4e83-a895-f710bb05cfe0 is claimed 00:09:27.447 [2024-11-27 08:41:24.044460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:27.447 [2024-11-27 08:41:24.044489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:09:27.447 [2024-11-27 08:41:24.044863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:27.447 [2024-11-27 08:41:24.045130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:27.447 [2024-11-27 08:41:24.045149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:27.447 [2024-11-27 08:41:24.045342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 [2024-11-27 
08:41:24.160526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.447 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 [2024-11-27 08:41:24.212597] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:27.706 [2024-11-27 08:41:24.212647] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a363acce-33aa-4e7a-b007-a5c61c15162c' was resized: old size 131072, new size 204800 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 [2024-11-27 08:41:24.220240] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:27.706 [2024-11-27 08:41:24.220268] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8f305ca9-4881-4e83-a895-f710bb05cfe0' was resized: old size 131072, new size 204800 00:09:27.706 
[2024-11-27 08:41:24.220305] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.706 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:27.707 08:41:24 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.707 [2024-11-27 08:41:24.340614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.707 [2024-11-27 08:41:24.396309] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:27.707 [2024-11-27 08:41:24.396490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:27.707 [2024-11-27 08:41:24.396514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.707 [2024-11-27 08:41:24.396542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:27.707 [2024-11-27 08:41:24.396717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.707 [2024-11-27 08:41:24.396787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.707 
[2024-11-27 08:41:24.396808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.707 [2024-11-27 08:41:24.404145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:27.707 [2024-11-27 08:41:24.404257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.707 [2024-11-27 08:41:24.404291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:27.707 [2024-11-27 08:41:24.404310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.707 [2024-11-27 08:41:24.407614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.707 [2024-11-27 08:41:24.407666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:27.707 pt0 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.707 [2024-11-27 08:41:24.410172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a363acce-33aa-4e7a-b007-a5c61c15162c 00:09:27.707 [2024-11-27 08:41:24.410255] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a363acce-33aa-4e7a-b007-a5c61c15162c is claimed 00:09:27.707 [2024-11-27 08:41:24.410464] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8f305ca9-4881-4e83-a895-f710bb05cfe0 00:09:27.707 [2024-11-27 08:41:24.410510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8f305ca9-4881-4e83-a895-f710bb05cfe0 is claimed 00:09:27.707 [2024-11-27 08:41:24.410700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8f305ca9-4881-4e83-a895-f710bb05cfe0 (2) smaller than existing raid bdev Raid (3) 00:09:27.707 [2024-11-27 08:41:24.410922] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a363acce-33aa-4e7a-b007-a5c61c15162c: File exists 00:09:27.707 [2024-11-27 08:41:24.410995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:27.707 [2024-11-27 08:41:24.411018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:09:27.707 [2024-11-27 08:41:24.411389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:27.707 [2024-11-27 08:41:24.411600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:27.707 [2024-11-27 08:41:24.411615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:27.707 [2024-11-27 08:41:24.411830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.707 [2024-11-27 08:41:24.424650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.707 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60136 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 60136 ']' 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # kill -0 60136 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # uname 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60136 00:09:27.966 killing process with pid 60136 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@969 -- # echo 'killing process with pid 60136' 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # kill 60136 00:09:27.966 [2024-11-27 08:41:24.511126] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.966 08:41:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@975 -- # wait 60136 00:09:27.966 [2024-11-27 08:41:24.511272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.966 [2024-11-27 08:41:24.511361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.966 [2024-11-27 08:41:24.511376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:29.337 [2024-11-27 08:41:25.903066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.716 08:41:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:30.716 00:09:30.716 real 0m4.889s 00:09:30.716 user 0m5.093s 00:09:30.716 sys 0m0.770s 00:09:30.716 08:41:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:30.716 08:41:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.716 ************************************ 00:09:30.716 END TEST raid0_resize_superblock_test 00:09:30.716 ************************************ 00:09:30.716 08:41:27 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:09:30.716 08:41:27 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:30.716 08:41:27 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:30.716 08:41:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.716 ************************************ 00:09:30.716 START TEST raid1_resize_superblock_test 00:09:30.716 
************************************ 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # raid_resize_superblock_test 1 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:09:30.716 Process raid pid: 60235 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60235 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60235' 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60235 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 60235 ']' 00:09:30.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:30.716 08:41:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.716 [2024-11-27 08:41:27.228780] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:09:30.716 [2024-11-27 08:41:27.229313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.716 [2024-11-27 08:41:27.410228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.975 [2024-11-27 08:41:27.569733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.233 [2024-11-27 08:41:27.814601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.233 [2024-11-27 08:41:27.814939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.800 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:31.800 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:09:31.800 08:41:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:31.800 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.800 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.368 malloc0 00:09:32.368 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.368 08:41:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:32.368 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.368 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.368 [2024-11-27 08:41:28.937092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:32.368 [2024-11-27 08:41:28.937193] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.368 [2024-11-27 08:41:28.937224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:32.368 [2024-11-27 08:41:28.937241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.368 [2024-11-27 08:41:28.940308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.369 [2024-11-27 08:41:28.940383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:32.369 pt0 00:09:32.369 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.369 08:41:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:32.369 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.369 08:41:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.369 39d94c0b-eb81-4563-890b-0760a8c7185d 00:09:32.369 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.369 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:32.369 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.369 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.369 08286a72-f177-4db9-bdac-e050248461ae 00:09:32.369 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.369 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:32.369 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.369 08:41:29 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 535334e6-c694-440f-bf78-b58916c22b06 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 [2024-11-27 08:41:29.142026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 08286a72-f177-4db9-bdac-e050248461ae is claimed 00:09:32.628 [2024-11-27 08:41:29.142191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 535334e6-c694-440f-bf78-b58916c22b06 is claimed 00:09:32.628 [2024-11-27 08:41:29.142418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:32.628 [2024-11-27 08:41:29.142462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:09:32.628 [2024-11-27 08:41:29.142817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:32.628 [2024-11-27 08:41:29.143281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:32.628 [2024-11-27 08:41:29.143306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:32.628 [2024-11-27 08:41:29.143561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 [2024-11-27 
08:41:29.266446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 [2024-11-27 08:41:29.314466] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:32.628 [2024-11-27 08:41:29.314636] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '08286a72-f177-4db9-bdac-e050248461ae' was resized: old size 131072, new size 204800 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 [2024-11-27 08:41:29.322227] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:32.628 [2024-11-27 08:41:29.322397] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '535334e6-c694-440f-bf78-b58916c22b06' was resized: old size 131072, new size 204800 00:09:32.628 
[2024-11-27 08:41:29.322455] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:32.628 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:32.888 08:41:29 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.888 [2024-11-27 08:41:29.446433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.888 [2024-11-27 08:41:29.498230] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:32.888 [2024-11-27 08:41:29.498366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:32.888 [2024-11-27 08:41:29.498416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:32.888 [2024-11-27 08:41:29.498662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.888 [2024-11-27 08:41:29.498987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.888 [2024-11-27 08:41:29.499105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.888 
[2024-11-27 08:41:29.499137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.888 [2024-11-27 08:41:29.506054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:32.888 [2024-11-27 08:41:29.506153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.888 [2024-11-27 08:41:29.506187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:32.888 [2024-11-27 08:41:29.506208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.888 [2024-11-27 08:41:29.509533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.888 [2024-11-27 08:41:29.509583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:32.888 pt0 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.888 [2024-11-27 08:41:29.512034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 08286a72-f177-4db9-bdac-e050248461ae 00:09:32.888 [2024-11-27 08:41:29.512150] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 08286a72-f177-4db9-bdac-e050248461ae is claimed 00:09:32.888 [2024-11-27 08:41:29.512297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 535334e6-c694-440f-bf78-b58916c22b06 00:09:32.888 [2024-11-27 08:41:29.512460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 535334e6-c694-440f-bf78-b58916c22b06 is claimed 00:09:32.888 [2024-11-27 08:41:29.512632] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 535334e6-c694-440f-bf78-b58916c22b06 (2) smaller than existing raid bdev Raid (3) 00:09:32.888 [2024-11-27 08:41:29.512665] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 08286a72-f177-4db9-bdac-e050248461ae: File exists 00:09:32.888 [2024-11-27 08:41:29.512721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:32.888 [2024-11-27 08:41:29.512741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:32.888 [2024-11-27 08:41:29.513078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:32.888 [2024-11-27 08:41:29.513285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:32.888 [2024-11-27 08:41:29.513307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:32.888 [2024-11-27 08:41:29.513511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # 
case $raid_level in 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.888 [2024-11-27 08:41:29.526435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60235 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 60235 ']' 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # kill -0 60235 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # uname 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60235 00:09:32.888 killing process with pid 60235 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- 
# echo 'killing process with pid 60235' 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # kill 60235 00:09:32.888 [2024-11-27 08:41:29.606060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.888 08:41:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@975 -- # wait 60235 00:09:32.888 [2024-11-27 08:41:29.606224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.888 [2024-11-27 08:41:29.606311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.888 [2024-11-27 08:41:29.606326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:34.794 [2024-11-27 08:41:31.046981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.743 08:41:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:35.743 00:09:35.743 real 0m5.072s 00:09:35.743 user 0m5.334s 00:09:35.743 sys 0m0.789s 00:09:35.743 08:41:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:35.743 ************************************ 00:09:35.743 END TEST raid1_resize_superblock_test 00:09:35.743 ************************************ 00:09:35.743 08:41:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.743 08:41:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:35.743 08:41:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:35.743 08:41:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:35.743 08:41:32 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:35.743 08:41:32 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:35.743 08:41:32 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:35.743 08:41:32 bdev_raid -- 
common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:35.743 08:41:32 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:35.743 08:41:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.743 ************************************ 00:09:35.743 START TEST raid_function_test_raid0 00:09:35.743 ************************************ 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # raid_function_test raid0 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:35.743 Process raid pid: 60343 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60343 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60343' 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60343 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@832 -- # '[' -z 60343 ']' 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:35.743 08:41:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:35.743 [2024-11-27 08:41:32.399895] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:09:35.743 [2024-11-27 08:41:32.401839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.003 [2024-11-27 08:41:32.601135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.003 [2024-11-27 08:41:32.751976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.263 [2024-11-27 08:41:32.981922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.263 [2024-11-27 08:41:32.981992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@865 -- # return 0 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:36.832 Base_1 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.832 
08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:36.832 Base_2 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:36.832 [2024-11-27 08:41:33.483668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:36.832 [2024-11-27 08:41:33.486256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:36.832 [2024-11-27 08:41:33.486391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:36.832 [2024-11-27 08:41:33.486414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:36.832 [2024-11-27 08:41:33.486773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:36.832 [2024-11-27 08:41:33.486977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:36.832 [2024-11-27 08:41:33.487001] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:36.832 [2024-11-27 08:41:33.487216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.832 08:41:33 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:36.832 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:37.402 [2024-11-27 08:41:33.864001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:37.402 /dev/nbd0 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local i 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # break 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.402 1+0 records in 00:09:37.402 1+0 records out 00:09:37.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048423 s, 8.5 MB/s 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # size=4096 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # return 0 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:37.402 08:41:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:37.727 { 00:09:37.727 "nbd_device": "/dev/nbd0", 00:09:37.727 "bdev_name": "raid" 00:09:37.727 } 00:09:37.727 ]' 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:37.727 { 00:09:37.727 "nbd_device": "/dev/nbd0", 00:09:37.727 "bdev_name": "raid" 00:09:37.727 } 00:09:37.727 ]' 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:37.727 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:37.728 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:37.728 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:37.728 4096+0 records in 00:09:37.728 4096+0 records out 00:09:37.728 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0329723 s, 63.6 MB/s 00:09:37.728 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:38.001 4096+0 records in 00:09:38.001 4096+0 records out 00:09:38.001 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.348659 s, 6.0 MB/s 00:09:38.001 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:38.001 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:38.260 128+0 records in 00:09:38.260 128+0 records out 00:09:38.260 65536 bytes (66 kB, 64 KiB) copied, 0.00109946 s, 59.6 MB/s 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:38.260 2035+0 records in 00:09:38.260 2035+0 records out 00:09:38.260 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0147616 s, 70.6 MB/s 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:38.260 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:38.261 456+0 records in 00:09:38.261 456+0 records out 00:09:38.261 233472 bytes (233 kB, 228 KiB) copied, 0.00364369 s, 64.1 MB/s 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.261 08:41:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:38.520 [2024-11-27 08:41:35.179816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:38.520 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:09:38.779 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:38.779 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:38.779 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60343 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # '[' -z 60343 ']' 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # kill -0 60343 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # uname 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60343 00:09:39.068 killing process with pid 60343 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # echo 'killing process with pid 60343' 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # kill 60343 00:09:39.068 [2024-11-27 08:41:35.625156] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.068 08:41:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@975 -- # wait 60343 00:09:39.068 [2024-11-27 08:41:35.625303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.068 [2024-11-27 08:41:35.625392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.068 [2024-11-27 08:41:35.625420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:39.068 [2024-11-27 08:41:35.818839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.446 08:41:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:40.446 00:09:40.446 real 0m4.606s 00:09:40.446 user 0m5.683s 00:09:40.446 sys 0m1.158s 00:09:40.446 08:41:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:40.446 ************************************ 00:09:40.446 END TEST raid_function_test_raid0 00:09:40.446 ************************************ 00:09:40.446 08:41:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:40.446 08:41:36 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:40.446 08:41:36 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:40.446 08:41:36 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:40.446 08:41:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.446 
************************************ 00:09:40.446 START TEST raid_function_test_concat 00:09:40.446 ************************************ 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # raid_function_test concat 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:40.446 Process raid pid: 60482 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60482 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60482' 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60482 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@832 -- # '[' -z 60482 ']' 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:40.446 08:41:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:40.446 [2024-11-27 08:41:37.045330] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:09:40.446 [2024-11-27 08:41:37.045915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.705 [2024-11-27 08:41:37.242444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.705 [2024-11-27 08:41:37.409383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.964 [2024-11-27 08:41:37.644085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.964 [2024-11-27 08:41:37.644151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@865 -- # return 0 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:41.533 Base_1 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:41.533 Base_2 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:41.533 [2024-11-27 08:41:38.147672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:41.533 [2024-11-27 08:41:38.150387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:41.533 [2024-11-27 08:41:38.150530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:41.533 [2024-11-27 08:41:38.150557] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:41.533 [2024-11-27 08:41:38.150986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.533 [2024-11-27 08:41:38.151209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:41.533 [2024-11-27 08:41:38.151225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:41.533 [2024-11-27 08:41:38.151461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.533 08:41:38 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:41.533 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:41.833 [2024-11-27 08:41:38.515869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:41.833 /dev/nbd0 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local i 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # break 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:41.833 1+0 records in 00:09:41.833 1+0 records out 00:09:41.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042526 s, 9.6 MB/s 00:09:41.833 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.097 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # size=4096 00:09:42.097 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.097 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:09:42.097 08:41:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # return 0 00:09:42.097 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.097 
08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:42.097 08:41:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:42.097 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:42.097 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:42.356 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:42.356 { 00:09:42.356 "nbd_device": "/dev/nbd0", 00:09:42.356 "bdev_name": "raid" 00:09:42.356 } 00:09:42.356 ]' 00:09:42.356 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:42.356 { 00:09:42.356 "nbd_device": "/dev/nbd0", 00:09:42.356 "bdev_name": "raid" 00:09:42.356 } 00:09:42.356 ]' 00:09:42.356 08:41:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:42.356 
08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:42.356 4096+0 records in 00:09:42.356 4096+0 records out 00:09:42.356 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0281833 s, 74.4 MB/s 00:09:42.356 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:42.950 4096+0 records in 00:09:42.950 4096+0 
records out 00:09:42.950 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.348767 s, 6.0 MB/s 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:42.950 128+0 records in 00:09:42.950 128+0 records out 00:09:42.950 65536 bytes (66 kB, 64 KiB) copied, 0.00103787 s, 63.1 MB/s 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:09:42.950 2035+0 records in 00:09:42.950 2035+0 records out 00:09:42.950 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00887181 s, 117 MB/s 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:42.950 456+0 records in 00:09:42.950 456+0 records out 00:09:42.950 233472 bytes (233 kB, 228 KiB) copied, 0.00324719 s, 71.9 MB/s 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:42.950 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.209 [2024-11-27 08:41:39.815977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:43.209 08:41:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:43.209 08:41:39 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60482 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # '[' -z 60482 ']' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # kill -0 60482 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # uname 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60482 00:09:43.470 killing process with pid 60482 00:09:43.470 08:41:40 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # echo 'killing process with pid 60482' 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # kill 60482 00:09:43.470 [2024-11-27 08:41:40.197665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.470 08:41:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@975 -- # wait 60482 00:09:43.470 [2024-11-27 08:41:40.197816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.470 [2024-11-27 08:41:40.197899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.470 [2024-11-27 08:41:40.197921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:43.729 [2024-11-27 08:41:40.393726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.109 ************************************ 00:09:45.109 END TEST raid_function_test_concat 00:09:45.109 ************************************ 00:09:45.109 08:41:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:09:45.109 00:09:45.109 real 0m4.608s 00:09:45.109 user 0m5.652s 00:09:45.109 sys 0m1.102s 00:09:45.109 08:41:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:45.109 08:41:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:45.109 08:41:41 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:09:45.109 08:41:41 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:45.109 08:41:41 bdev_raid -- 
common/autotest_common.sh@1108 -- # xtrace_disable 00:09:45.109 08:41:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.109 ************************************ 00:09:45.109 START TEST raid0_resize_test 00:09:45.109 ************************************ 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # raid_resize_test 0 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60612 00:09:45.109 Process raid pid: 60612 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60612' 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60612 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # '[' -z 60612 ']' 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.109 08:41:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local 
max_retries=100 00:09:45.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.110 08:41:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.110 08:41:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:45.110 08:41:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 [2024-11-27 08:41:41.702529] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:09:45.110 [2024-11-27 08:41:41.702783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.368 [2024-11-27 08:41:41.877702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.368 [2024-11-27 08:41:42.036760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.627 [2024-11-27 08:41:42.274179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.627 [2024-11-27 08:41:42.274240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@865 -- # return 0 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.207 Base_1 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.207 
08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.207 Base_2 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.207 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.207 [2024-11-27 08:41:42.727178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:46.208 [2024-11-27 08:41:42.729906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:46.208 [2024-11-27 08:41:42.730006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:46.208 [2024-11-27 08:41:42.730025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:46.208 [2024-11-27 08:41:42.730379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:46.208 [2024-11-27 08:41:42.730549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:46.208 [2024-11-27 08:41:42.730566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:46.208 [2024-11-27 08:41:42.730738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.208 
08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.208 [2024-11-27 08:41:42.735158] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:46.208 [2024-11-27 08:41:42.735201] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:46.208 true 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:46.208 [2024-11-27 08:41:42.747426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.208 [2024-11-27 08:41:42.799280] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:46.208 [2024-11-27 08:41:42.799327] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:46.208 [2024-11-27 08:41:42.799446] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:46.208 true 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:46.208 [2024-11-27 08:41:42.811512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60612 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@951 -- # '[' -z 60612 ']' 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # kill -0 60612 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # uname 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60612 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:46.208 killing process with pid 60612 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 60612' 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # kill 60612 00:09:46.208 [2024-11-27 08:41:42.889061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.208 08:41:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@975 -- # wait 60612 00:09:46.208 [2024-11-27 08:41:42.889197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.208 [2024-11-27 08:41:42.889277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.208 [2024-11-27 08:41:42.889294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:46.208 [2024-11-27 08:41:42.906066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.585 08:41:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:47.585 00:09:47.585 real 0m2.493s 00:09:47.585 user 0m2.709s 00:09:47.585 sys 0m0.420s 00:09:47.585 ************************************ 00:09:47.585 END TEST raid0_resize_test 00:09:47.585 
************************************ 00:09:47.585 08:41:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:47.585 08:41:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.585 08:41:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:47.585 08:41:44 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:47.585 08:41:44 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:47.585 08:41:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.585 ************************************ 00:09:47.585 START TEST raid1_resize_test 00:09:47.585 ************************************ 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # raid_resize_test 1 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:47.585 Process raid pid: 60674 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60674 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60674' 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60674 00:09:47.585 
08:41:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@832 -- # '[' -z 60674 ']' 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:47.585 08:41:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.585 [2024-11-27 08:41:44.272360] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:09:47.585 [2024-11-27 08:41:44.272574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.843 [2024-11-27 08:41:44.463403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.102 [2024-11-27 08:41:44.615006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.102 [2024-11-27 08:41:44.845282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.102 [2024-11-27 08:41:44.845346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.669 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@865 -- # return 0 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.670 Base_1 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.670 Base_2 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.670 [2024-11-27 08:41:45.270686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:48.670 [2024-11-27 08:41:45.273539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:48.670 [2024-11-27 08:41:45.273632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:48.670 [2024-11-27 08:41:45.273654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:48.670 [2024-11-27 08:41:45.274027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:48.670 [2024-11-27 08:41:45.274234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:48.670 [2024-11-27 08:41:45.274253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:48.670 [2024-11-27 08:41:45.274717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.670 [2024-11-27 08:41:45.278736] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:48.670 [2024-11-27 08:41:45.278792] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:48.670 true 00:09:48.670 
08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:48.670 [2024-11-27 08:41:45.291042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.670 [2024-11-27 08:41:45.342870] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:48.670 [2024-11-27 08:41:45.343099] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:48.670 [2024-11-27 08:41:45.343180] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:48.670 true 00:09:48.670 08:41:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.670 [2024-11-27 08:41:45.355017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60674 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@951 -- # '[' -z 60674 ']' 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # kill -0 60674 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # uname 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:48.670 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60674 00:09:48.929 killing process with pid 60674 00:09:48.929 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:48.929 08:41:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:09:48.929 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 60674'
00:09:48.929 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # kill 60674
00:09:48.929 [2024-11-27 08:41:45.435467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:48.929 08:41:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@975 -- # wait 60674
00:09:48.929 [2024-11-27 08:41:45.435603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:48.929 [2024-11-27 08:41:45.436284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:48.929 [2024-11-27 08:41:45.436473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:09:48.929 [2024-11-27 08:41:45.452863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:49.866 ************************************
00:09:49.866 END TEST raid1_resize_test
00:09:49.866 ************************************
00:09:49.866 08:41:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:09:49.866
00:09:49.866 real 0m2.450s
00:09:49.866 user 0m2.624s
00:09:49.866 sys 0m0.457s
00:09:49.866 08:41:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # xtrace_disable
00:09:49.866 08:41:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.125 08:41:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:09:50.125 08:41:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:50.125 08:41:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:09:50.125 08:41:46 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']'
00:09:50.125 08:41:46 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable
00:09:50.125 08:41:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:50.125 ************************************
00:09:50.125 START TEST raid_state_function_test
00:09:50.125 ************************************
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test raid0 2 false
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:50.125 Process raid pid: 60736
08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60736
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60736'
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60736
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 60736 ']'
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:50.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable
00:09:50.125 08:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:50.125 [2024-11-27 08:41:46.786693] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization...
00:09:50.126 [2024-11-27 08:41:46.786905] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:50.384 [2024-11-27 08:41:46.974562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:50.384 [2024-11-27 08:41:47.130591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:50.643 [2024-11-27 08:41:47.364102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:50.643 [2024-11-27 08:41:47.364171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.212 [2024-11-27 08:41:47.727507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:51.212 [2024-11-27 08:41:47.727618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:51.212 [2024-11-27 08:41:47.727638] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:51.212 [2024-11-27 08:41:47.727656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:51.212 "name": "Existed_Raid",
00:09:51.212 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.212 "strip_size_kb": 64,
00:09:51.212 "state": "configuring",
00:09:51.212 "raid_level": "raid0",
00:09:51.212 "superblock": false,
00:09:51.212 "num_base_bdevs": 2,
00:09:51.212 "num_base_bdevs_discovered": 0,
00:09:51.212 "num_base_bdevs_operational": 2,
00:09:51.212 "base_bdevs_list": [
00:09:51.212 {
00:09:51.212 "name": "BaseBdev1",
00:09:51.212 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.212 "is_configured": false,
00:09:51.212 "data_offset": 0,
00:09:51.212 "data_size": 0
00:09:51.212 },
00:09:51.212 {
00:09:51.212 "name": "BaseBdev2",
00:09:51.212 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.212 "is_configured": false,
00:09:51.212 "data_offset": 0,
00:09:51.212 "data_size": 0
00:09:51.212 }
00:09:51.212 ]
00:09:51.212 }'
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:51.212 08:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.782 [2024-11-27 08:41:48.251612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:51.782 [2024-11-27 08:41:48.251687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.782 [2024-11-27 08:41:48.263778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:51.782 [2024-11-27 08:41:48.263981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:51.782 [2024-11-27 08:41:48.264109] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:51.782 [2024-11-27 08:41:48.264176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.782 [2024-11-27 08:41:48.316784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:51.782 BaseBdev1
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout=
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]]
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.782 [
00:09:51.782 {
00:09:51.782 "name": "BaseBdev1",
00:09:51.782 "aliases": [
00:09:51.782 "a404944f-efc4-4b23-a416-a98118a60345"
00:09:51.782 ],
00:09:51.782 "product_name": "Malloc disk",
00:09:51.782 "block_size": 512,
00:09:51.782 "num_blocks": 65536,
00:09:51.782 "uuid": "a404944f-efc4-4b23-a416-a98118a60345",
00:09:51.782 "assigned_rate_limits": {
00:09:51.782 "rw_ios_per_sec": 0,
00:09:51.782 "rw_mbytes_per_sec": 0,
00:09:51.782 "r_mbytes_per_sec": 0,
00:09:51.782 "w_mbytes_per_sec": 0
00:09:51.782 },
00:09:51.782 "claimed": true,
00:09:51.782 "claim_type": "exclusive_write",
00:09:51.782 "zoned": false,
00:09:51.782 "supported_io_types": {
00:09:51.782 "read": true,
00:09:51.782 "write": true,
00:09:51.782 "unmap": true,
00:09:51.782 "flush": true,
00:09:51.782 "reset": true,
00:09:51.782 "nvme_admin": false,
00:09:51.782 "nvme_io": false,
00:09:51.782 "nvme_io_md": false,
00:09:51.782 "write_zeroes": true,
00:09:51.782 "zcopy": true,
00:09:51.782 "get_zone_info": false,
00:09:51.782 "zone_management": false,
00:09:51.782 "zone_append": false,
00:09:51.782 "compare": false,
00:09:51.782 "compare_and_write": false,
00:09:51.782 "abort": true,
00:09:51.782 "seek_hole": false,
00:09:51.782 "seek_data": false,
00:09:51.782 "copy": true,
00:09:51.782 "nvme_iov_md": false
00:09:51.782 },
00:09:51.782 "memory_domains": [
00:09:51.782 {
00:09:51.782 "dma_device_id": "system",
00:09:51.782 "dma_device_type": 1
00:09:51.782 },
00:09:51.782 {
00:09:51.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:51.782 "dma_device_type": 2
00:09:51.782 }
00:09:51.782 ],
00:09:51.782 "driver_specific": {}
00:09:51.782 }
00:09:51.782 ]
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:51.782 "name": "Existed_Raid",
00:09:51.782 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.782 "strip_size_kb": 64,
00:09:51.782 "state": "configuring",
00:09:51.782 "raid_level": "raid0",
00:09:51.782 "superblock": false,
00:09:51.782 "num_base_bdevs": 2,
00:09:51.782 "num_base_bdevs_discovered": 1,
00:09:51.782 "num_base_bdevs_operational": 2,
00:09:51.782 "base_bdevs_list": [
00:09:51.782 {
00:09:51.782 "name": "BaseBdev1",
00:09:51.782 "uuid": "a404944f-efc4-4b23-a416-a98118a60345",
00:09:51.782 "is_configured": true,
00:09:51.782 "data_offset": 0,
00:09:51.782 "data_size": 65536
00:09:51.782 },
00:09:51.782 {
00:09:51.782 "name": "BaseBdev2",
00:09:51.782 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.782 "is_configured": false,
00:09:51.782 "data_offset": 0,
00:09:51.782 "data_size": 0
00:09:51.782 }
00:09:51.782 ]
00:09:51.782 }'
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:51.782 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.350 [2024-11-27 08:41:48.845003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:52.350 [2024-11-27 08:41:48.845144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.350 [2024-11-27 08:41:48.853010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:52.350 [2024-11-27 08:41:48.855856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:52.350 [2024-11-27 08:41:48.855927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:52.350 "name": "Existed_Raid",
00:09:52.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.350 "strip_size_kb": 64,
00:09:52.350 "state": "configuring",
00:09:52.350 "raid_level": "raid0",
00:09:52.350 "superblock": false,
00:09:52.350 "num_base_bdevs": 2,
00:09:52.350 "num_base_bdevs_discovered": 1,
00:09:52.350 "num_base_bdevs_operational": 2,
00:09:52.350 "base_bdevs_list": [
00:09:52.350 {
00:09:52.350 "name": "BaseBdev1",
00:09:52.350 "uuid": "a404944f-efc4-4b23-a416-a98118a60345",
00:09:52.350 "is_configured": true,
00:09:52.350 "data_offset": 0,
00:09:52.350 "data_size": 65536
00:09:52.350 },
00:09:52.350 {
00:09:52.350 "name": "BaseBdev2",
00:09:52.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.350 "is_configured": false,
00:09:52.350 "data_offset": 0,
00:09:52.350 "data_size": 0
00:09:52.350 }
00:09:52.350 ]
00:09:52.350 }'
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:52.350 08:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.609 [2024-11-27 08:41:49.355673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:52.609 [2024-11-27 08:41:49.356064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:52.609 [2024-11-27 08:41:49.356093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:09:52.609 [2024-11-27 08:41:49.356493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:52.609 [2024-11-27 08:41:49.356738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:52.609 [2024-11-27 08:41:49.356763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:52.609 [2024-11-27 08:41:49.357142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:52.609 BaseBdev2
08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout=
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]]
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.609 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.868 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.868 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:52.868 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.868 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.868 [
00:09:52.868 {
00:09:52.868 "name": "BaseBdev2",
00:09:52.868 "aliases": [
00:09:52.868 "31ec4b16-01d2-4a15-8061-bb2362c8da9a"
00:09:52.868 ],
00:09:52.868 "product_name": "Malloc disk",
00:09:52.868 "block_size": 512,
00:09:52.868 "num_blocks": 65536,
00:09:52.868 "uuid": "31ec4b16-01d2-4a15-8061-bb2362c8da9a",
00:09:52.868 "assigned_rate_limits": {
00:09:52.868 "rw_ios_per_sec": 0,
00:09:52.868 "rw_mbytes_per_sec": 0,
00:09:52.868 "r_mbytes_per_sec": 0,
00:09:52.868 "w_mbytes_per_sec": 0
00:09:52.868 },
00:09:52.868 "claimed": true,
00:09:52.868 "claim_type": "exclusive_write",
00:09:52.868 "zoned": false,
00:09:52.868 "supported_io_types": {
00:09:52.868 "read": true,
00:09:52.868 "write": true,
00:09:52.868 "unmap": true,
00:09:52.868 "flush": true,
00:09:52.868 "reset": true,
00:09:52.868 "nvme_admin": false,
00:09:52.868 "nvme_io": false,
00:09:52.868 "nvme_io_md": false,
00:09:52.868 "write_zeroes": true,
00:09:52.868 "zcopy": true,
00:09:52.868 "get_zone_info": false,
00:09:52.868 "zone_management": false,
00:09:52.868 "zone_append": false,
00:09:52.868 "compare": false,
00:09:52.868 "compare_and_write": false,
00:09:52.868 "abort": true,
00:09:52.868 "seek_hole": false,
00:09:52.868 "seek_data": false,
00:09:52.868 "copy": true,
00:09:52.868 "nvme_iov_md": false
00:09:52.868 },
00:09:52.868 "memory_domains": [
00:09:52.868 {
00:09:52.868 "dma_device_id": "system",
00:09:52.868 "dma_device_type": 1
00:09:52.868 },
00:09:52.868 {
00:09:52.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:52.868 "dma_device_type": 2
00:09:52.868 }
00:09:52.868 ],
00:09:52.868 "driver_specific": {}
00:09:52.868 }
00:09:52.868 ]
00:09:52.868 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.868 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0
00:09:52.868 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:52.868 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:52.869 "name": "Existed_Raid",
00:09:52.869 "uuid": "7d8db147-04c1-43ea-a7ef-3135aac4dbcc",
00:09:52.869 "strip_size_kb": 64,
00:09:52.869 "state": "online",
00:09:52.869 "raid_level": "raid0",
00:09:52.869 "superblock": false,
00:09:52.869 "num_base_bdevs": 2,
00:09:52.869 "num_base_bdevs_discovered": 2,
00:09:52.869 "num_base_bdevs_operational": 2,
00:09:52.869 "base_bdevs_list": [
00:09:52.869 {
00:09:52.869 "name": "BaseBdev1",
00:09:52.869 "uuid": "a404944f-efc4-4b23-a416-a98118a60345",
00:09:52.869 "is_configured": true,
00:09:52.869 "data_offset": 0,
00:09:52.869 "data_size": 65536
00:09:52.869 },
00:09:52.869 {
00:09:52.869 "name": "BaseBdev2",
00:09:52.869 "uuid": "31ec4b16-01d2-4a15-8061-bb2362c8da9a",
00:09:52.869 "is_configured": true,
00:09:52.869 "data_offset": 0,
00:09:52.869 "data_size": 65536
00:09:52.869 }
00:09:52.869 ]
00:09:52.869 }'
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:52.869 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:53.439 [2024-11-27 08:41:49.932386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.439 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:53.439 "name": "Existed_Raid",
00:09:53.439 "aliases": [
00:09:53.439 "7d8db147-04c1-43ea-a7ef-3135aac4dbcc"
00:09:53.439 ],
00:09:53.439 "product_name": "Raid Volume",
00:09:53.439 "block_size": 512,
00:09:53.439 "num_blocks": 131072,
00:09:53.439 "uuid": "7d8db147-04c1-43ea-a7ef-3135aac4dbcc",
00:09:53.439 "assigned_rate_limits": {
00:09:53.439 "rw_ios_per_sec": 0,
00:09:53.439 "rw_mbytes_per_sec": 0,
00:09:53.439 "r_mbytes_per_sec": 0,
00:09:53.439 "w_mbytes_per_sec": 0
00:09:53.439 },
00:09:53.439 "claimed": false,
00:09:53.439 "zoned": false,
00:09:53.439 "supported_io_types": {
00:09:53.439 "read": true,
00:09:53.439 "write": true,
00:09:53.439 "unmap": true,
00:09:53.439 "flush": true,
00:09:53.439 "reset": true,
00:09:53.439 "nvme_admin": false,
00:09:53.439 "nvme_io": false,
00:09:53.439 "nvme_io_md": false,
00:09:53.439 "write_zeroes": true,
00:09:53.439 "zcopy": false,
00:09:53.439 "get_zone_info": false,
00:09:53.439 "zone_management": false,
00:09:53.439 "zone_append": false,
00:09:53.439 "compare": false,
00:09:53.439 "compare_and_write": false,
00:09:53.439 "abort": false,
00:09:53.439 "seek_hole": false,
00:09:53.439 "seek_data": false,
00:09:53.439 "copy": false,
00:09:53.439 "nvme_iov_md": false
00:09:53.439 },
00:09:53.439 "memory_domains": [
00:09:53.439 {
00:09:53.439 "dma_device_id": "system",
00:09:53.439 "dma_device_type": 1
00:09:53.439 },
00:09:53.439 {
00:09:53.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.439 "dma_device_type": 2
00:09:53.440 },
00:09:53.440 {
00:09:53.440 "dma_device_id": "system",
00:09:53.440 "dma_device_type": 1
00:09:53.440 },
00:09:53.440 {
00:09:53.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.440 "dma_device_type": 2
00:09:53.440 }
00:09:53.440 ],
00:09:53.440 "driver_specific": {
00:09:53.440 "raid": {
00:09:53.440 "uuid": "7d8db147-04c1-43ea-a7ef-3135aac4dbcc",
00:09:53.440 "strip_size_kb": 64,
00:09:53.440 "state": "online",
00:09:53.440 "raid_level": "raid0",
00:09:53.440 "superblock": false,
00:09:53.440 "num_base_bdevs": 2,
00:09:53.440 "num_base_bdevs_discovered": 2,
00:09:53.440 "num_base_bdevs_operational": 2,
00:09:53.440 "base_bdevs_list": [
00:09:53.440 {
00:09:53.440 "name": "BaseBdev1",
00:09:53.440 "uuid": "a404944f-efc4-4b23-a416-a98118a60345",
00:09:53.440 "is_configured": true,
00:09:53.440 "data_offset": 0,
00:09:53.440 "data_size": 65536
00:09:53.440 },
00:09:53.440 {
00:09:53.440 "name": "BaseBdev2",
00:09:53.440 "uuid": "31ec4b16-01d2-4a15-8061-bb2362c8da9a",
00:09:53.440 "is_configured": true,
00:09:53.440 "data_offset": 0,
00:09:53.440 "data_size": 65536
00:09:53.440 }
00:09:53.440 ]
00:09:53.440 }
00:09:53.440 }
00:09:53.440 }'
00:09:53.440 08:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:53.440 BaseBdev2'
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.440 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:53.440 [2024-11-27 08:41:50.176117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:53.440 [2024-11-27 08:41:50.176189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:53.440 [2024-11-27 08:41:50.176287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:53.742 08:41:50
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.742 "name": "Existed_Raid", 00:09:53.742 "uuid": "7d8db147-04c1-43ea-a7ef-3135aac4dbcc", 00:09:53.742 "strip_size_kb": 64, 00:09:53.742 "state": "offline", 00:09:53.742 "raid_level": "raid0", 00:09:53.742 "superblock": false, 00:09:53.742 "num_base_bdevs": 2, 00:09:53.742 "num_base_bdevs_discovered": 1, 00:09:53.742 "num_base_bdevs_operational": 1, 00:09:53.742 "base_bdevs_list": [ 00:09:53.742 { 00:09:53.742 "name": null, 00:09:53.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.742 "is_configured": false, 00:09:53.742 "data_offset": 0, 00:09:53.742 "data_size": 65536 00:09:53.742 }, 00:09:53.742 { 00:09:53.742 "name": "BaseBdev2", 00:09:53.742 "uuid": "31ec4b16-01d2-4a15-8061-bb2362c8da9a", 00:09:53.742 "is_configured": true, 00:09:53.742 "data_offset": 0, 00:09:53.742 "data_size": 65536 00:09:53.742 } 00:09:53.742 ] 00:09:53.742 }' 00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.742 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.007 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:54.008 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.008 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.008 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.008 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.008 08:41:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.278 [2024-11-27 08:41:50.812990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.278 [2024-11-27 08:41:50.813077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:54.278 08:41:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60736 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 60736 ']' 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # kill -0 60736 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60736 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:54.278 killing process with pid 60736 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 60736' 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # kill 60736 00:09:54.278 [2024-11-27 08:41:50.997394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.278 08:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 60736 00:09:54.278 [2024-11-27 08:41:51.014111] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.655 08:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:55.655 00:09:55.655 real 0m5.510s 00:09:55.655 user 0m8.138s 00:09:55.655 sys 0m0.819s 00:09:55.655 08:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 
00:09:55.655 ************************************ 00:09:55.655 END TEST raid_state_function_test 00:09:55.655 ************************************ 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.656 08:41:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:55.656 08:41:52 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:09:55.656 08:41:52 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:55.656 08:41:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.656 ************************************ 00:09:55.656 START TEST raid_state_function_test_sb 00:09:55.656 ************************************ 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test raid0 2 true 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60995 00:09:55.656 Process raid pid: 60995 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60995' 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 60995 00:09:55.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 60995 ']' 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:55.656 08:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.656 [2024-11-27 08:41:52.343519] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:09:55.656 [2024-11-27 08:41:52.343723] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.914 [2024-11-27 08:41:52.528522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.172 [2024-11-27 08:41:52.681470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.172 [2024-11-27 08:41:52.912671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.172 [2024-11-27 08:41:52.912723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.739 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:56.739 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.740 [2024-11-27 08:41:53.323711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.740 [2024-11-27 08:41:53.323795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.740 [2024-11-27 08:41:53.323813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.740 [2024-11-27 08:41:53.323830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.740 
08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.740 "name": "Existed_Raid", 00:09:56.740 "uuid": "0105137e-ccd9-4600-8e83-e042e28e24cd", 00:09:56.740 "strip_size_kb": 
64, 00:09:56.740 "state": "configuring", 00:09:56.740 "raid_level": "raid0", 00:09:56.740 "superblock": true, 00:09:56.740 "num_base_bdevs": 2, 00:09:56.740 "num_base_bdevs_discovered": 0, 00:09:56.740 "num_base_bdevs_operational": 2, 00:09:56.740 "base_bdevs_list": [ 00:09:56.740 { 00:09:56.740 "name": "BaseBdev1", 00:09:56.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.740 "is_configured": false, 00:09:56.740 "data_offset": 0, 00:09:56.740 "data_size": 0 00:09:56.740 }, 00:09:56.740 { 00:09:56.740 "name": "BaseBdev2", 00:09:56.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.740 "is_configured": false, 00:09:56.740 "data_offset": 0, 00:09:56.740 "data_size": 0 00:09:56.740 } 00:09:56.740 ] 00:09:56.740 }' 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.740 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.307 [2024-11-27 08:41:53.839848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.307 [2024-11-27 08:41:53.839909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.307 08:41:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.307 [2024-11-27 08:41:53.851757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.307 [2024-11-27 08:41:53.851815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.307 [2024-11-27 08:41:53.851832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.307 [2024-11-27 08:41:53.851852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.307 [2024-11-27 08:41:53.901395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.307 BaseBdev1 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
bdev_timeout=2000 00:09:57.307 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.308 [ 00:09:57.308 { 00:09:57.308 "name": "BaseBdev1", 00:09:57.308 "aliases": [ 00:09:57.308 "de30d090-d722-4597-b6cb-23ed12bdd9f4" 00:09:57.308 ], 00:09:57.308 "product_name": "Malloc disk", 00:09:57.308 "block_size": 512, 00:09:57.308 "num_blocks": 65536, 00:09:57.308 "uuid": "de30d090-d722-4597-b6cb-23ed12bdd9f4", 00:09:57.308 "assigned_rate_limits": { 00:09:57.308 "rw_ios_per_sec": 0, 00:09:57.308 "rw_mbytes_per_sec": 0, 00:09:57.308 "r_mbytes_per_sec": 0, 00:09:57.308 "w_mbytes_per_sec": 0 00:09:57.308 }, 00:09:57.308 "claimed": true, 00:09:57.308 "claim_type": "exclusive_write", 00:09:57.308 "zoned": false, 00:09:57.308 "supported_io_types": { 00:09:57.308 "read": true, 00:09:57.308 "write": true, 00:09:57.308 "unmap": true, 00:09:57.308 "flush": true, 00:09:57.308 "reset": true, 00:09:57.308 "nvme_admin": false, 00:09:57.308 "nvme_io": false, 00:09:57.308 "nvme_io_md": false, 00:09:57.308 "write_zeroes": true, 00:09:57.308 "zcopy": true, 00:09:57.308 "get_zone_info": false, 00:09:57.308 "zone_management": false, 00:09:57.308 "zone_append": false, 00:09:57.308 "compare": false, 00:09:57.308 "compare_and_write": false, 00:09:57.308 
"abort": true, 00:09:57.308 "seek_hole": false, 00:09:57.308 "seek_data": false, 00:09:57.308 "copy": true, 00:09:57.308 "nvme_iov_md": false 00:09:57.308 }, 00:09:57.308 "memory_domains": [ 00:09:57.308 { 00:09:57.308 "dma_device_id": "system", 00:09:57.308 "dma_device_type": 1 00:09:57.308 }, 00:09:57.308 { 00:09:57.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.308 "dma_device_type": 2 00:09:57.308 } 00:09:57.308 ], 00:09:57.308 "driver_specific": {} 00:09:57.308 } 00:09:57.308 ] 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.308 "name": "Existed_Raid", 00:09:57.308 "uuid": "1f7adc96-488e-4402-a430-2d380e2beafb", 00:09:57.308 "strip_size_kb": 64, 00:09:57.308 "state": "configuring", 00:09:57.308 "raid_level": "raid0", 00:09:57.308 "superblock": true, 00:09:57.308 "num_base_bdevs": 2, 00:09:57.308 "num_base_bdevs_discovered": 1, 00:09:57.308 "num_base_bdevs_operational": 2, 00:09:57.308 "base_bdevs_list": [ 00:09:57.308 { 00:09:57.308 "name": "BaseBdev1", 00:09:57.308 "uuid": "de30d090-d722-4597-b6cb-23ed12bdd9f4", 00:09:57.308 "is_configured": true, 00:09:57.308 "data_offset": 2048, 00:09:57.308 "data_size": 63488 00:09:57.308 }, 00:09:57.308 { 00:09:57.308 "name": "BaseBdev2", 00:09:57.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.308 "is_configured": false, 00:09:57.308 "data_offset": 0, 00:09:57.308 "data_size": 0 00:09:57.308 } 00:09:57.308 ] 00:09:57.308 }' 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.308 08:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.876 [2024-11-27 08:41:54.417609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.876 [2024-11-27 08:41:54.417707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.876 [2024-11-27 08:41:54.425703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.876 [2024-11-27 08:41:54.428410] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.876 [2024-11-27 08:41:54.428485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.876 "name": "Existed_Raid", 00:09:57.876 "uuid": "49355f93-9989-4e64-a394-c912ffd5fad4", 00:09:57.876 "strip_size_kb": 64, 00:09:57.876 "state": "configuring", 00:09:57.876 "raid_level": "raid0", 00:09:57.876 "superblock": true, 00:09:57.876 "num_base_bdevs": 2, 00:09:57.876 "num_base_bdevs_discovered": 1, 00:09:57.876 "num_base_bdevs_operational": 2, 00:09:57.876 "base_bdevs_list": [ 00:09:57.876 { 00:09:57.876 "name": "BaseBdev1", 00:09:57.876 "uuid": "de30d090-d722-4597-b6cb-23ed12bdd9f4", 00:09:57.876 "is_configured": true, 00:09:57.876 "data_offset": 2048, 
00:09:57.876 "data_size": 63488 00:09:57.876 }, 00:09:57.876 { 00:09:57.876 "name": "BaseBdev2", 00:09:57.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.876 "is_configured": false, 00:09:57.876 "data_offset": 0, 00:09:57.876 "data_size": 0 00:09:57.876 } 00:09:57.876 ] 00:09:57.876 }' 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.876 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.465 08:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.465 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.465 08:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.465 [2024-11-27 08:41:55.004967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.465 [2024-11-27 08:41:55.005317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:58.465 [2024-11-27 08:41:55.005355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:58.465 BaseBdev2 00:09:58.465 [2024-11-27 08:41:55.005719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:58.465 [2024-11-27 08:41:55.005926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:58.465 [2024-11-27 08:41:55.005950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:58.465 [2024-11-27 08:41:55.006170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.465 [ 00:09:58.465 { 00:09:58.465 "name": "BaseBdev2", 00:09:58.465 "aliases": [ 00:09:58.465 "f4d0308a-9fdd-4c2e-8f21-be70bad980c1" 00:09:58.465 ], 00:09:58.465 "product_name": "Malloc disk", 00:09:58.465 "block_size": 512, 00:09:58.465 "num_blocks": 65536, 00:09:58.465 "uuid": "f4d0308a-9fdd-4c2e-8f21-be70bad980c1", 00:09:58.465 "assigned_rate_limits": { 00:09:58.465 "rw_ios_per_sec": 0, 00:09:58.465 "rw_mbytes_per_sec": 0, 00:09:58.465 "r_mbytes_per_sec": 0, 00:09:58.465 "w_mbytes_per_sec": 0 00:09:58.465 }, 00:09:58.465 "claimed": true, 00:09:58.465 "claim_type": 
"exclusive_write", 00:09:58.465 "zoned": false, 00:09:58.465 "supported_io_types": { 00:09:58.465 "read": true, 00:09:58.465 "write": true, 00:09:58.465 "unmap": true, 00:09:58.465 "flush": true, 00:09:58.465 "reset": true, 00:09:58.465 "nvme_admin": false, 00:09:58.465 "nvme_io": false, 00:09:58.465 "nvme_io_md": false, 00:09:58.465 "write_zeroes": true, 00:09:58.465 "zcopy": true, 00:09:58.465 "get_zone_info": false, 00:09:58.465 "zone_management": false, 00:09:58.465 "zone_append": false, 00:09:58.465 "compare": false, 00:09:58.465 "compare_and_write": false, 00:09:58.465 "abort": true, 00:09:58.465 "seek_hole": false, 00:09:58.465 "seek_data": false, 00:09:58.465 "copy": true, 00:09:58.465 "nvme_iov_md": false 00:09:58.465 }, 00:09:58.465 "memory_domains": [ 00:09:58.465 { 00:09:58.465 "dma_device_id": "system", 00:09:58.465 "dma_device_type": 1 00:09:58.465 }, 00:09:58.465 { 00:09:58.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.465 "dma_device_type": 2 00:09:58.465 } 00:09:58.465 ], 00:09:58.465 "driver_specific": {} 00:09:58.465 } 00:09:58.465 ] 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.465 "name": "Existed_Raid", 00:09:58.465 "uuid": "49355f93-9989-4e64-a394-c912ffd5fad4", 00:09:58.465 "strip_size_kb": 64, 00:09:58.465 "state": "online", 00:09:58.465 "raid_level": "raid0", 00:09:58.465 "superblock": true, 00:09:58.465 "num_base_bdevs": 2, 00:09:58.465 "num_base_bdevs_discovered": 2, 00:09:58.465 "num_base_bdevs_operational": 2, 00:09:58.465 "base_bdevs_list": [ 00:09:58.465 { 00:09:58.465 "name": "BaseBdev1", 00:09:58.465 "uuid": "de30d090-d722-4597-b6cb-23ed12bdd9f4", 00:09:58.465 "is_configured": true, 00:09:58.465 "data_offset": 2048, 00:09:58.465 "data_size": 63488 
00:09:58.465 }, 00:09:58.465 { 00:09:58.465 "name": "BaseBdev2", 00:09:58.465 "uuid": "f4d0308a-9fdd-4c2e-8f21-be70bad980c1", 00:09:58.465 "is_configured": true, 00:09:58.465 "data_offset": 2048, 00:09:58.465 "data_size": 63488 00:09:58.465 } 00:09:58.465 ] 00:09:58.465 }' 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.465 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.032 [2024-11-27 08:41:55.557638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.032 "name": 
"Existed_Raid", 00:09:59.032 "aliases": [ 00:09:59.032 "49355f93-9989-4e64-a394-c912ffd5fad4" 00:09:59.032 ], 00:09:59.032 "product_name": "Raid Volume", 00:09:59.032 "block_size": 512, 00:09:59.032 "num_blocks": 126976, 00:09:59.032 "uuid": "49355f93-9989-4e64-a394-c912ffd5fad4", 00:09:59.032 "assigned_rate_limits": { 00:09:59.032 "rw_ios_per_sec": 0, 00:09:59.032 "rw_mbytes_per_sec": 0, 00:09:59.032 "r_mbytes_per_sec": 0, 00:09:59.032 "w_mbytes_per_sec": 0 00:09:59.032 }, 00:09:59.032 "claimed": false, 00:09:59.032 "zoned": false, 00:09:59.032 "supported_io_types": { 00:09:59.032 "read": true, 00:09:59.032 "write": true, 00:09:59.032 "unmap": true, 00:09:59.032 "flush": true, 00:09:59.032 "reset": true, 00:09:59.032 "nvme_admin": false, 00:09:59.032 "nvme_io": false, 00:09:59.032 "nvme_io_md": false, 00:09:59.032 "write_zeroes": true, 00:09:59.032 "zcopy": false, 00:09:59.032 "get_zone_info": false, 00:09:59.032 "zone_management": false, 00:09:59.032 "zone_append": false, 00:09:59.032 "compare": false, 00:09:59.032 "compare_and_write": false, 00:09:59.032 "abort": false, 00:09:59.032 "seek_hole": false, 00:09:59.032 "seek_data": false, 00:09:59.032 "copy": false, 00:09:59.032 "nvme_iov_md": false 00:09:59.032 }, 00:09:59.032 "memory_domains": [ 00:09:59.032 { 00:09:59.032 "dma_device_id": "system", 00:09:59.032 "dma_device_type": 1 00:09:59.032 }, 00:09:59.032 { 00:09:59.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.032 "dma_device_type": 2 00:09:59.032 }, 00:09:59.032 { 00:09:59.032 "dma_device_id": "system", 00:09:59.032 "dma_device_type": 1 00:09:59.032 }, 00:09:59.032 { 00:09:59.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.032 "dma_device_type": 2 00:09:59.032 } 00:09:59.032 ], 00:09:59.032 "driver_specific": { 00:09:59.032 "raid": { 00:09:59.032 "uuid": "49355f93-9989-4e64-a394-c912ffd5fad4", 00:09:59.032 "strip_size_kb": 64, 00:09:59.032 "state": "online", 00:09:59.032 "raid_level": "raid0", 00:09:59.032 "superblock": true, 00:09:59.032 
"num_base_bdevs": 2, 00:09:59.032 "num_base_bdevs_discovered": 2, 00:09:59.032 "num_base_bdevs_operational": 2, 00:09:59.032 "base_bdevs_list": [ 00:09:59.032 { 00:09:59.032 "name": "BaseBdev1", 00:09:59.032 "uuid": "de30d090-d722-4597-b6cb-23ed12bdd9f4", 00:09:59.032 "is_configured": true, 00:09:59.032 "data_offset": 2048, 00:09:59.032 "data_size": 63488 00:09:59.032 }, 00:09:59.032 { 00:09:59.032 "name": "BaseBdev2", 00:09:59.032 "uuid": "f4d0308a-9fdd-4c2e-8f21-be70bad980c1", 00:09:59.032 "is_configured": true, 00:09:59.032 "data_offset": 2048, 00:09:59.032 "data_size": 63488 00:09:59.032 } 00:09:59.032 ] 00:09:59.032 } 00:09:59.032 } 00:09:59.032 }' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.032 BaseBdev2' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.032 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.291 [2024-11-27 08:41:55.817353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.291 [2024-11-27 08:41:55.817482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.291 [2024-11-27 08:41:55.817616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.291 08:41:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.291 "name": "Existed_Raid", 00:09:59.291 "uuid": "49355f93-9989-4e64-a394-c912ffd5fad4", 00:09:59.291 "strip_size_kb": 64, 00:09:59.291 "state": "offline", 00:09:59.291 "raid_level": "raid0", 00:09:59.291 "superblock": true, 00:09:59.291 "num_base_bdevs": 2, 00:09:59.291 "num_base_bdevs_discovered": 1, 00:09:59.291 "num_base_bdevs_operational": 1, 00:09:59.291 "base_bdevs_list": [ 00:09:59.291 { 00:09:59.291 "name": null, 00:09:59.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.291 "is_configured": false, 00:09:59.291 "data_offset": 0, 00:09:59.291 "data_size": 63488 00:09:59.291 }, 00:09:59.291 { 00:09:59.291 "name": "BaseBdev2", 00:09:59.291 "uuid": "f4d0308a-9fdd-4c2e-8f21-be70bad980c1", 00:09:59.291 "is_configured": true, 00:09:59.291 "data_offset": 2048, 00:09:59.291 "data_size": 63488 00:09:59.291 } 00:09:59.291 ] 00:09:59.291 }' 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.291 08:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.858 08:41:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.858 [2024-11-27 08:41:56.492229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.858 [2024-11-27 08:41:56.492312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.858 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.858 08:41:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60995 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 60995 ']' 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 60995 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 60995 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:00.117 killing process with pid 60995 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 60995' 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 60995 00:10:00.117 [2024-11-27 08:41:56.664361] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.117 08:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 60995 00:10:00.117 [2024-11-27 08:41:56.680846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.490 08:41:57 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.490 00:10:01.490 real 0m5.632s 00:10:01.490 user 0m8.327s 00:10:01.490 sys 0m0.890s 00:10:01.490 08:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:01.490 08:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.490 ************************************ 00:10:01.490 END TEST raid_state_function_test_sb 00:10:01.490 ************************************ 00:10:01.490 08:41:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:01.490 08:41:57 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:01.490 08:41:57 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:01.490 08:41:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.490 ************************************ 00:10:01.490 START TEST raid_superblock_test 00:10:01.490 ************************************ 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test raid0 2 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:01.490 08:41:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61247 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61247 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 61247 ']' 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:01.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:01.490 08:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.490 [2024-11-27 08:41:58.021534] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:10:01.490 [2024-11-27 08:41:58.021737] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61247 ] 00:10:01.490 [2024-11-27 08:41:58.204980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.749 [2024-11-27 08:41:58.362407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.009 [2024-11-27 08:41:58.595809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.009 [2024-11-27 08:41:58.595932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.589 08:41:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.589 malloc1 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.589 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.590 [2024-11-27 08:41:59.101035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.590 [2024-11-27 08:41:59.101154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.590 [2024-11-27 08:41:59.101224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:02.590 [2024-11-27 08:41:59.101241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.590 [2024-11-27 08:41:59.104323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.590 [2024-11-27 08:41:59.104383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.590 pt1 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:02.590 08:41:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.590 malloc2 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.590 [2024-11-27 08:41:59.162294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.590 [2024-11-27 08:41:59.162396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.590 [2024-11-27 08:41:59.162446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:02.590 
[2024-11-27 08:41:59.162478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.590 [2024-11-27 08:41:59.165565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.590 [2024-11-27 08:41:59.165610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.590 pt2 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.590 [2024-11-27 08:41:59.174474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.590 [2024-11-27 08:41:59.177250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.590 [2024-11-27 08:41:59.177502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:02.590 [2024-11-27 08:41:59.177533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:02.590 [2024-11-27 08:41:59.177859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:02.590 [2024-11-27 08:41:59.178079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:02.590 [2024-11-27 08:41:59.178122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:02.590 [2024-11-27 08:41:59.178382] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.590 "name": "raid_bdev1", 00:10:02.590 "uuid": 
"0a5c7f50-259d-4180-b1aa-e05f08b91642", 00:10:02.590 "strip_size_kb": 64, 00:10:02.590 "state": "online", 00:10:02.590 "raid_level": "raid0", 00:10:02.590 "superblock": true, 00:10:02.590 "num_base_bdevs": 2, 00:10:02.590 "num_base_bdevs_discovered": 2, 00:10:02.590 "num_base_bdevs_operational": 2, 00:10:02.590 "base_bdevs_list": [ 00:10:02.590 { 00:10:02.590 "name": "pt1", 00:10:02.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.590 "is_configured": true, 00:10:02.590 "data_offset": 2048, 00:10:02.590 "data_size": 63488 00:10:02.590 }, 00:10:02.590 { 00:10:02.590 "name": "pt2", 00:10:02.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.590 "is_configured": true, 00:10:02.590 "data_offset": 2048, 00:10:02.590 "data_size": 63488 00:10:02.590 } 00:10:02.590 ] 00:10:02.590 }' 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.590 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.159 
08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.159 [2024-11-27 08:41:59.719143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.159 "name": "raid_bdev1", 00:10:03.159 "aliases": [ 00:10:03.159 "0a5c7f50-259d-4180-b1aa-e05f08b91642" 00:10:03.159 ], 00:10:03.159 "product_name": "Raid Volume", 00:10:03.159 "block_size": 512, 00:10:03.159 "num_blocks": 126976, 00:10:03.159 "uuid": "0a5c7f50-259d-4180-b1aa-e05f08b91642", 00:10:03.159 "assigned_rate_limits": { 00:10:03.159 "rw_ios_per_sec": 0, 00:10:03.159 "rw_mbytes_per_sec": 0, 00:10:03.159 "r_mbytes_per_sec": 0, 00:10:03.159 "w_mbytes_per_sec": 0 00:10:03.159 }, 00:10:03.159 "claimed": false, 00:10:03.159 "zoned": false, 00:10:03.159 "supported_io_types": { 00:10:03.159 "read": true, 00:10:03.159 "write": true, 00:10:03.159 "unmap": true, 00:10:03.159 "flush": true, 00:10:03.159 "reset": true, 00:10:03.159 "nvme_admin": false, 00:10:03.159 "nvme_io": false, 00:10:03.159 "nvme_io_md": false, 00:10:03.159 "write_zeroes": true, 00:10:03.159 "zcopy": false, 00:10:03.159 "get_zone_info": false, 00:10:03.159 "zone_management": false, 00:10:03.159 "zone_append": false, 00:10:03.159 "compare": false, 00:10:03.159 "compare_and_write": false, 00:10:03.159 "abort": false, 00:10:03.159 "seek_hole": false, 00:10:03.159 "seek_data": false, 00:10:03.159 "copy": false, 00:10:03.159 "nvme_iov_md": false 00:10:03.159 }, 00:10:03.159 "memory_domains": [ 00:10:03.159 { 00:10:03.159 "dma_device_id": "system", 00:10:03.159 "dma_device_type": 1 00:10:03.159 }, 00:10:03.159 { 00:10:03.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.159 "dma_device_type": 2 00:10:03.159 }, 00:10:03.159 { 00:10:03.159 "dma_device_id": "system", 00:10:03.159 
"dma_device_type": 1 00:10:03.159 }, 00:10:03.159 { 00:10:03.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.159 "dma_device_type": 2 00:10:03.159 } 00:10:03.159 ], 00:10:03.159 "driver_specific": { 00:10:03.159 "raid": { 00:10:03.159 "uuid": "0a5c7f50-259d-4180-b1aa-e05f08b91642", 00:10:03.159 "strip_size_kb": 64, 00:10:03.159 "state": "online", 00:10:03.159 "raid_level": "raid0", 00:10:03.159 "superblock": true, 00:10:03.159 "num_base_bdevs": 2, 00:10:03.159 "num_base_bdevs_discovered": 2, 00:10:03.159 "num_base_bdevs_operational": 2, 00:10:03.159 "base_bdevs_list": [ 00:10:03.159 { 00:10:03.159 "name": "pt1", 00:10:03.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.159 "is_configured": true, 00:10:03.159 "data_offset": 2048, 00:10:03.159 "data_size": 63488 00:10:03.159 }, 00:10:03.159 { 00:10:03.159 "name": "pt2", 00:10:03.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.159 "is_configured": true, 00:10:03.159 "data_offset": 2048, 00:10:03.159 "data_size": 63488 00:10:03.159 } 00:10:03.159 ] 00:10:03.159 } 00:10:03.159 } 00:10:03.159 }' 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:03.159 pt2' 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.159 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:03.419 [2024-11-27 08:41:59.979471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:10:03.419 08:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0a5c7f50-259d-4180-b1aa-e05f08b91642 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0a5c7f50-259d-4180-b1aa-e05f08b91642 ']' 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.419 [2024-11-27 08:42:00.030788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.419 [2024-11-27 08:42:00.030842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.419 [2024-11-27 08:42:00.030982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.419 [2024-11-27 08:42:00.031054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.419 [2024-11-27 08:42:00.031077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.419 
08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:03.419 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.420 [2024-11-27 08:42:00.166976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:03.420 [2024-11-27 08:42:00.170161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:03.420 [2024-11-27 08:42:00.170262] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:03.420 [2024-11-27 08:42:00.170380] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:03.420 [2024-11-27 08:42:00.170411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.420 [2024-11-27 08:42:00.170430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:03.420 request: 00:10:03.420 { 00:10:03.420 "name": "raid_bdev1", 00:10:03.420 "raid_level": "raid0", 00:10:03.420 "base_bdevs": [ 00:10:03.420 "malloc1", 00:10:03.420 "malloc2" 00:10:03.420 ], 00:10:03.420 "strip_size_kb": 64, 00:10:03.420 "superblock": false, 00:10:03.420 "method": "bdev_raid_create", 00:10:03.420 "req_id": 1 00:10:03.420 } 00:10:03.420 Got JSON-RPC error response 00:10:03.420 response: 00:10:03.420 { 00:10:03.420 "code": -17, 00:10:03.420 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:03.420 } 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:03.420 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.679 [2024-11-27 08:42:00.227095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:03.679 [2024-11-27 08:42:00.227215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.679 [2024-11-27 08:42:00.227311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:03.679 [2024-11-27 08:42:00.227367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.679 [2024-11-27 08:42:00.231102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.679 [2024-11-27 08:42:00.231172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:03.679 [2024-11-27 08:42:00.231362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:03.679 [2024-11-27 08:42:00.231514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:03.679 pt1 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.679 "name": "raid_bdev1", 00:10:03.679 "uuid": "0a5c7f50-259d-4180-b1aa-e05f08b91642", 00:10:03.679 "strip_size_kb": 64, 00:10:03.679 "state": "configuring", 00:10:03.679 "raid_level": "raid0", 00:10:03.679 "superblock": true, 00:10:03.679 "num_base_bdevs": 2, 00:10:03.679 "num_base_bdevs_discovered": 1, 00:10:03.679 "num_base_bdevs_operational": 2, 00:10:03.679 "base_bdevs_list": [ 00:10:03.679 { 00:10:03.679 "name": "pt1", 00:10:03.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.679 "is_configured": true, 00:10:03.679 "data_offset": 2048, 00:10:03.679 "data_size": 63488 00:10:03.679 }, 00:10:03.679 { 00:10:03.679 "name": null, 00:10:03.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.679 "is_configured": false, 00:10:03.679 "data_offset": 2048, 00:10:03.679 "data_size": 63488 00:10:03.679 } 00:10:03.679 ] 00:10:03.679 }' 00:10:03.679 08:42:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.679 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.247 [2024-11-27 08:42:00.743710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.247 [2024-11-27 08:42:00.743877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.247 [2024-11-27 08:42:00.743912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:04.247 [2024-11-27 08:42:00.743931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.247 [2024-11-27 08:42:00.744764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.247 [2024-11-27 08:42:00.744809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.247 [2024-11-27 08:42:00.744964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:04.247 [2024-11-27 08:42:00.745036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.247 [2024-11-27 08:42:00.745191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.247 [2024-11-27 08:42:00.745237] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:04.247 [2024-11-27 08:42:00.745618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:04.247 [2024-11-27 08:42:00.745884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.247 [2024-11-27 08:42:00.745900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:04.247 [2024-11-27 08:42:00.746077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.247 pt2 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.247 "name": "raid_bdev1", 00:10:04.247 "uuid": "0a5c7f50-259d-4180-b1aa-e05f08b91642", 00:10:04.247 "strip_size_kb": 64, 00:10:04.247 "state": "online", 00:10:04.247 "raid_level": "raid0", 00:10:04.247 "superblock": true, 00:10:04.247 "num_base_bdevs": 2, 00:10:04.247 "num_base_bdevs_discovered": 2, 00:10:04.247 "num_base_bdevs_operational": 2, 00:10:04.247 "base_bdevs_list": [ 00:10:04.247 { 00:10:04.247 "name": "pt1", 00:10:04.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.247 "is_configured": true, 00:10:04.247 "data_offset": 2048, 00:10:04.247 "data_size": 63488 00:10:04.247 }, 00:10:04.247 { 00:10:04.247 "name": "pt2", 00:10:04.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.247 "is_configured": true, 00:10:04.247 "data_offset": 2048, 00:10:04.247 "data_size": 63488 00:10:04.247 } 00:10:04.247 ] 00:10:04.247 }' 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.247 08:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:04.506 
08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.506 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.764 [2024-11-27 08:42:01.268215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.764 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.764 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.764 "name": "raid_bdev1", 00:10:04.764 "aliases": [ 00:10:04.764 "0a5c7f50-259d-4180-b1aa-e05f08b91642" 00:10:04.764 ], 00:10:04.764 "product_name": "Raid Volume", 00:10:04.764 "block_size": 512, 00:10:04.764 "num_blocks": 126976, 00:10:04.764 "uuid": "0a5c7f50-259d-4180-b1aa-e05f08b91642", 00:10:04.764 "assigned_rate_limits": { 00:10:04.764 "rw_ios_per_sec": 0, 00:10:04.765 "rw_mbytes_per_sec": 0, 00:10:04.765 "r_mbytes_per_sec": 0, 00:10:04.765 "w_mbytes_per_sec": 0 00:10:04.765 }, 00:10:04.765 "claimed": false, 00:10:04.765 "zoned": false, 00:10:04.765 "supported_io_types": { 00:10:04.765 "read": true, 00:10:04.765 "write": true, 00:10:04.765 "unmap": true, 00:10:04.765 "flush": true, 00:10:04.765 "reset": true, 00:10:04.765 "nvme_admin": false, 00:10:04.765 "nvme_io": false, 00:10:04.765 "nvme_io_md": false, 00:10:04.765 
"write_zeroes": true, 00:10:04.765 "zcopy": false, 00:10:04.765 "get_zone_info": false, 00:10:04.765 "zone_management": false, 00:10:04.765 "zone_append": false, 00:10:04.765 "compare": false, 00:10:04.765 "compare_and_write": false, 00:10:04.765 "abort": false, 00:10:04.765 "seek_hole": false, 00:10:04.765 "seek_data": false, 00:10:04.765 "copy": false, 00:10:04.765 "nvme_iov_md": false 00:10:04.765 }, 00:10:04.765 "memory_domains": [ 00:10:04.765 { 00:10:04.765 "dma_device_id": "system", 00:10:04.765 "dma_device_type": 1 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.765 "dma_device_type": 2 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "system", 00:10:04.765 "dma_device_type": 1 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.765 "dma_device_type": 2 00:10:04.765 } 00:10:04.765 ], 00:10:04.765 "driver_specific": { 00:10:04.765 "raid": { 00:10:04.765 "uuid": "0a5c7f50-259d-4180-b1aa-e05f08b91642", 00:10:04.765 "strip_size_kb": 64, 00:10:04.765 "state": "online", 00:10:04.765 "raid_level": "raid0", 00:10:04.765 "superblock": true, 00:10:04.765 "num_base_bdevs": 2, 00:10:04.765 "num_base_bdevs_discovered": 2, 00:10:04.765 "num_base_bdevs_operational": 2, 00:10:04.765 "base_bdevs_list": [ 00:10:04.765 { 00:10:04.765 "name": "pt1", 00:10:04.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.765 "is_configured": true, 00:10:04.765 "data_offset": 2048, 00:10:04.765 "data_size": 63488 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "name": "pt2", 00:10:04.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.765 "is_configured": true, 00:10:04.765 "data_offset": 2048, 00:10:04.765 "data_size": 63488 00:10:04.765 } 00:10:04.765 ] 00:10:04.765 } 00:10:04.765 } 00:10:04.765 }' 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:04.765 pt2' 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.023 08:42:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.023 [2024-11-27 08:42:01.532370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0a5c7f50-259d-4180-b1aa-e05f08b91642 '!=' 0a5c7f50-259d-4180-b1aa-e05f08b91642 ']' 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61247 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 61247 ']' 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 61247 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # uname 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 61247 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:05.023 killing process with pid 61247 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 61247' 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # kill 61247 00:10:05.023 [2024-11-27 08:42:01.616671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.023 08:42:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@975 -- # wait 61247 00:10:05.023 [2024-11-27 08:42:01.616846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.023 [2024-11-27 08:42:01.616923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.023 [2024-11-27 08:42:01.616943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:05.281 [2024-11-27 08:42:01.820595] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.658 08:42:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:06.658 00:10:06.658 real 0m5.091s 00:10:06.658 user 0m7.339s 00:10:06.658 sys 0m0.834s 00:10:06.658 08:42:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:06.658 ************************************ 00:10:06.658 END TEST raid_superblock_test 00:10:06.658 ************************************ 00:10:06.658 08:42:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.658 08:42:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:06.658 08:42:03 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:10:06.658 08:42:03 bdev_raid -- common/autotest_common.sh@1108 -- # 
xtrace_disable 00:10:06.658 08:42:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.658 ************************************ 00:10:06.658 START TEST raid_read_error_test 00:10:06.658 ************************************ 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid0 2 read 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hk72seXQ6D 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61464 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61464 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 61464 ']' 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:06.658 08:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.658 [2024-11-27 08:42:03.199856] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:10:06.658 [2024-11-27 08:42:03.201035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61464 ] 00:10:06.658 [2024-11-27 08:42:03.402602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.917 [2024-11-27 08:42:03.581594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.176 [2024-11-27 08:42:03.819467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.176 [2024-11-27 08:42:03.819556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.744 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:07.744 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 BaseBdev1_malloc 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 true 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 [2024-11-27 08:42:04.302015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:07.745 [2024-11-27 08:42:04.302640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.745 [2024-11-27 08:42:04.302716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:07.745 [2024-11-27 08:42:04.302754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.745 [2024-11-27 08:42:04.306454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.745 [2024-11-27 08:42:04.306615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:07.745 BaseBdev1 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:07.745 BaseBdev2_malloc 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 true 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 [2024-11-27 08:42:04.376762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:07.745 [2024-11-27 08:42:04.376851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.745 [2024-11-27 08:42:04.376889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:07.745 [2024-11-27 08:42:04.376919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.745 [2024-11-27 08:42:04.380169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.745 [2024-11-27 08:42:04.380229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:07.745 BaseBdev2 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:07.745 08:42:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 [2024-11-27 08:42:04.389092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.745 [2024-11-27 08:42:04.391920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.745 [2024-11-27 08:42:04.392250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:07.745 [2024-11-27 08:42:04.392285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:07.745 [2024-11-27 08:42:04.392670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:07.745 [2024-11-27 08:42:04.392973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:07.745 [2024-11-27 08:42:04.393003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:07.745 [2024-11-27 08:42:04.393281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.745 "name": "raid_bdev1", 00:10:07.745 "uuid": "40788ff2-330c-428b-83b7-d6f775281e9b", 00:10:07.745 "strip_size_kb": 64, 00:10:07.745 "state": "online", 00:10:07.745 "raid_level": "raid0", 00:10:07.745 "superblock": true, 00:10:07.745 "num_base_bdevs": 2, 00:10:07.745 "num_base_bdevs_discovered": 2, 00:10:07.745 "num_base_bdevs_operational": 2, 00:10:07.745 "base_bdevs_list": [ 00:10:07.745 { 00:10:07.745 "name": "BaseBdev1", 00:10:07.745 "uuid": "c258817c-0949-5c7c-9c6b-723d18b2f643", 00:10:07.745 "is_configured": true, 00:10:07.745 "data_offset": 2048, 00:10:07.745 "data_size": 63488 00:10:07.745 }, 00:10:07.745 { 00:10:07.745 "name": "BaseBdev2", 00:10:07.745 "uuid": "57773ad0-4146-5a9b-964e-48043ac6d2da", 00:10:07.745 "is_configured": true, 00:10:07.745 "data_offset": 2048, 00:10:07.745 "data_size": 63488 00:10:07.745 } 00:10:07.745 ] 00:10:07.745 }' 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.745 08:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.313 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:08.313 08:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:08.572 [2024-11-27 08:42:05.076298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 08:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 08:42:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.510 "name": "raid_bdev1", 00:10:09.510 "uuid": "40788ff2-330c-428b-83b7-d6f775281e9b", 00:10:09.510 "strip_size_kb": 64, 00:10:09.510 "state": "online", 00:10:09.510 "raid_level": "raid0", 00:10:09.510 "superblock": true, 00:10:09.510 "num_base_bdevs": 2, 00:10:09.510 "num_base_bdevs_discovered": 2, 00:10:09.510 "num_base_bdevs_operational": 2, 00:10:09.510 "base_bdevs_list": [ 00:10:09.510 { 00:10:09.510 "name": "BaseBdev1", 00:10:09.510 "uuid": "c258817c-0949-5c7c-9c6b-723d18b2f643", 00:10:09.510 "is_configured": true, 00:10:09.510 "data_offset": 2048, 00:10:09.510 "data_size": 63488 00:10:09.510 }, 00:10:09.510 { 00:10:09.510 "name": "BaseBdev2", 00:10:09.510 "uuid": "57773ad0-4146-5a9b-964e-48043ac6d2da", 00:10:09.510 "is_configured": true, 00:10:09.510 "data_offset": 2048, 00:10:09.510 "data_size": 63488 00:10:09.510 } 00:10:09.510 ] 00:10:09.510 }' 00:10:09.510 08:42:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.510 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.769 [2024-11-27 08:42:06.499988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:09.769 [2024-11-27 08:42:06.500078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.769 [2024-11-27 08:42:06.503563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.769 [2024-11-27 08:42:06.503631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.769 [2024-11-27 08:42:06.503674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.769 [2024-11-27 08:42:06.503692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:09.769 { 00:10:09.769 "results": [ 00:10:09.769 { 00:10:09.769 "job": "raid_bdev1", 00:10:09.769 "core_mask": "0x1", 00:10:09.769 "workload": "randrw", 00:10:09.769 "percentage": 50, 00:10:09.769 "status": "finished", 00:10:09.769 "queue_depth": 1, 00:10:09.769 "io_size": 131072, 00:10:09.769 "runtime": 1.421159, 00:10:09.769 "iops": 10411.924351884623, 00:10:09.769 "mibps": 1301.490543985578, 00:10:09.769 "io_failed": 1, 00:10:09.769 "io_timeout": 0, 00:10:09.769 "avg_latency_us": 134.31107987565886, 00:10:09.769 "min_latency_us": 37.93454545454546, 00:10:09.769 "max_latency_us": 1921.3963636363637 00:10:09.769 } 00:10:09.769 ], 00:10:09.769 "core_count": 1 00:10:09.769 } 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61464 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 61464 ']' 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 61464 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # uname 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:09.769 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 61464 00:10:10.028 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:10.028 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:10.028 killing process with pid 61464 00:10:10.028 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 61464' 00:10:10.028 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 61464 00:10:10.028 [2024-11-27 08:42:06.537729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.028 08:42:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 61464 00:10:10.028 [2024-11-27 08:42:06.664513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hk72seXQ6D 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:11.000 00:10:11.000 real 0m4.664s 00:10:11.000 user 0m5.839s 00:10:11.000 sys 0m0.674s 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:11.000 08:42:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.000 ************************************ 00:10:11.000 END TEST raid_read_error_test 00:10:11.000 ************************************ 00:10:11.259 08:42:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:11.259 08:42:07 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:10:11.259 08:42:07 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:11.259 08:42:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.259 ************************************ 00:10:11.259 START TEST raid_write_error_test 00:10:11.259 ************************************ 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid0 2 write 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.259 08:42:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EAKtrXcScy 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61614 00:10:11.259 08:42:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61614 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 61614 ']' 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:11.259 08:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.259 [2024-11-27 08:42:07.965244] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:10:11.259 [2024-11-27 08:42:07.965432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61614 ] 00:10:11.518 [2024-11-27 08:42:08.149987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.777 [2024-11-27 08:42:08.278196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.777 [2024-11-27 08:42:08.484966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.777 [2024-11-27 08:42:08.485045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.345 BaseBdev1_malloc 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.345 true 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.345 [2024-11-27 08:42:08.942714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:12.345 [2024-11-27 08:42:08.942938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.345 [2024-11-27 08:42:08.942979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:12.345 [2024-11-27 08:42:08.942999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.345 [2024-11-27 08:42:08.946184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.345 [2024-11-27 08:42:08.946410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:12.345 BaseBdev1 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.345 BaseBdev2_malloc 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:12.345 08:42:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.345 08:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.345 true 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.345 [2024-11-27 08:42:09.013228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:12.345 [2024-11-27 08:42:09.013352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.345 [2024-11-27 08:42:09.013382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:12.345 [2024-11-27 08:42:09.013405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.345 [2024-11-27 08:42:09.016396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.345 [2024-11-27 08:42:09.016446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:12.345 BaseBdev2 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.345 [2024-11-27 08:42:09.021381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:12.345 [2024-11-27 08:42:09.024232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.345 [2024-11-27 08:42:09.024626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:12.345 [2024-11-27 08:42:09.024768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:12.345 [2024-11-27 08:42:09.025137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:12.345 [2024-11-27 08:42:09.025523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:12.345 [2024-11-27 08:42:09.025654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:12.345 [2024-11-27 08:42:09.026041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.345 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.345 "name": "raid_bdev1", 00:10:12.345 "uuid": "b22cb086-5ff0-4a2c-96d8-89a4eff19468", 00:10:12.345 "strip_size_kb": 64, 00:10:12.345 "state": "online", 00:10:12.345 "raid_level": "raid0", 00:10:12.345 "superblock": true, 00:10:12.345 "num_base_bdevs": 2, 00:10:12.345 "num_base_bdevs_discovered": 2, 00:10:12.345 "num_base_bdevs_operational": 2, 00:10:12.345 "base_bdevs_list": [ 00:10:12.345 { 00:10:12.345 "name": "BaseBdev1", 00:10:12.345 "uuid": "727e9b77-fb60-5ce2-9f25-52adaabd2cb8", 00:10:12.345 "is_configured": true, 00:10:12.345 "data_offset": 2048, 00:10:12.345 "data_size": 63488 00:10:12.345 }, 00:10:12.345 { 00:10:12.345 "name": "BaseBdev2", 00:10:12.345 "uuid": "4b2fb5d4-47be-5e57-8dc5-1a7423c5c996", 00:10:12.345 "is_configured": true, 00:10:12.345 "data_offset": 2048, 00:10:12.345 "data_size": 63488 00:10:12.345 } 00:10:12.345 ] 00:10:12.345 }' 00:10:12.346 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.346 08:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.946 08:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:12.946 08:42:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:13.205 [2024-11-27 08:42:09.707873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.141 08:42:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.141 "name": "raid_bdev1", 00:10:14.141 "uuid": "b22cb086-5ff0-4a2c-96d8-89a4eff19468", 00:10:14.141 "strip_size_kb": 64, 00:10:14.141 "state": "online", 00:10:14.141 "raid_level": "raid0", 00:10:14.141 "superblock": true, 00:10:14.141 "num_base_bdevs": 2, 00:10:14.141 "num_base_bdevs_discovered": 2, 00:10:14.141 "num_base_bdevs_operational": 2, 00:10:14.141 "base_bdevs_list": [ 00:10:14.141 { 00:10:14.141 "name": "BaseBdev1", 00:10:14.141 "uuid": "727e9b77-fb60-5ce2-9f25-52adaabd2cb8", 00:10:14.141 "is_configured": true, 00:10:14.141 "data_offset": 2048, 00:10:14.141 "data_size": 63488 00:10:14.141 }, 00:10:14.141 { 00:10:14.141 "name": "BaseBdev2", 00:10:14.141 "uuid": "4b2fb5d4-47be-5e57-8dc5-1a7423c5c996", 00:10:14.141 "is_configured": true, 00:10:14.141 "data_offset": 2048, 00:10:14.141 "data_size": 63488 00:10:14.141 } 00:10:14.141 ] 00:10:14.141 }' 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.141 08:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.400 [2024-11-27 08:42:11.126175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.400 [2024-11-27 08:42:11.126219] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.400 [2024-11-27 08:42:11.129633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.400 [2024-11-27 08:42:11.129702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.400 [2024-11-27 08:42:11.129752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.400 [2024-11-27 08:42:11.129773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:14.400 { 00:10:14.400 "results": [ 00:10:14.400 { 00:10:14.400 "job": "raid_bdev1", 00:10:14.400 "core_mask": "0x1", 00:10:14.400 "workload": "randrw", 00:10:14.400 "percentage": 50, 00:10:14.400 "status": "finished", 00:10:14.400 "queue_depth": 1, 00:10:14.400 "io_size": 131072, 00:10:14.400 "runtime": 1.41547, 00:10:14.400 "iops": 10069.446897496944, 00:10:14.400 "mibps": 1258.680862187118, 00:10:14.400 "io_failed": 1, 00:10:14.400 "io_timeout": 0, 00:10:14.400 "avg_latency_us": 139.20898401724557, 00:10:14.400 "min_latency_us": 38.86545454545455, 00:10:14.400 "max_latency_us": 1936.290909090909 00:10:14.400 } 00:10:14.400 ], 00:10:14.400 "core_count": 1 00:10:14.400 } 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61614 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@951 -- # '[' -z 61614 ']' 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 61614 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # uname 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:14.400 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 61614 00:10:14.659 killing process with pid 61614 00:10:14.659 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:14.659 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:14.659 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 61614' 00:10:14.659 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 61614 00:10:14.659 [2024-11-27 08:42:11.168947] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.659 08:42:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@975 -- # wait 61614 00:10:14.659 [2024-11-27 08:42:11.307825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EAKtrXcScy 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:16.124 00:10:16.124 real 0m4.675s 00:10:16.124 user 0m5.808s 00:10:16.124 sys 0m0.616s 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:16.124 ************************************ 00:10:16.124 END TEST raid_write_error_test 00:10:16.124 ************************************ 00:10:16.124 08:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.124 08:42:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:16.124 08:42:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:16.124 08:42:12 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:10:16.124 08:42:12 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:16.124 08:42:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.124 ************************************ 00:10:16.124 START TEST raid_state_function_test 00:10:16.124 ************************************ 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test concat 2 false 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
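The pass/fail check just above pipes the bdevperf log through `grep -v Job | grep raid_bdev1 | awk '{print $6}'` to pull out `fail_per_s` (0.71 here), then asserts it is non-zero because raid0 has no redundancy and the injected write failure must surface. A Python sketch of that extraction; the exact column layout of a bdevperf summary line is an assumption, only the "sixth whitespace-separated field" rule comes from the trace:

```python
def extract_fail_per_s(log_text, job_name="raid_bdev1"):
    # Mimic `grep -v Job | grep raid_bdev1 | awk '{print $6}'`:
    # skip header lines containing "Job", find the job's summary
    # line, and return its sixth whitespace-separated field.
    for line in log_text.splitlines():
        if "Job" in line:
            continue
        if job_name in line:
            fields = line.split()
            if len(fields) >= 6:
                return fields[5]
    return None
```

The test then compares the extracted value against `0.00`, which is exactly the `[[ 0.71 != \0\.\0\0 ]]` check visible in the trace.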
00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:16.124 Process raid pid: 61753 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61753 
00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61753' 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61753 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 61753 ']' 00:10:16.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:16.124 08:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.124 [2024-11-27 08:42:12.656530] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
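The `verify_raid_bdev_state` checks seen throughout this trace fetch `bdev_raid_get_bdevs all` and select one entry with `jq -r '.[] | select(.name == "raid_bdev1")'`. A minimal Python equivalent of that selection step, using the same JSON shape the RPC returns in the trace:

```python
import json

def select_raid_bdev(rpc_output, name):
    # Equivalent of jq's `.[] | select(.name == NAME)` over the
    # bdev_raid_get_bdevs output: return the matching raid bdev
    # object, or None if no bdev has that name.
    for bdev in json.loads(rpc_output):
        if bdev.get("name") == name:
            return bdev
    return None
```

The test script then asserts fields of the selected object (`state`, `raid_level`, `strip_size_kb`, `num_base_bdevs_operational`) against the expected values.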
00:10:16.125 [2024-11-27 08:42:12.656760] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.125 [2024-11-27 08:42:12.843818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.383 [2024-11-27 08:42:12.972601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.642 [2024-11-27 08:42:13.178388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.642 [2024-11-27 08:42:13.178590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.901 [2024-11-27 08:42:13.624627] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.901 [2024-11-27 08:42:13.624694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.901 [2024-11-27 08:42:13.624712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.901 [2024-11-27 08:42:13.624728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.901 08:42:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.901 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.160 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.160 "name": "Existed_Raid", 00:10:17.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.160 "strip_size_kb": 64, 00:10:17.160 "state": "configuring", 00:10:17.161 
"raid_level": "concat", 00:10:17.161 "superblock": false, 00:10:17.161 "num_base_bdevs": 2, 00:10:17.161 "num_base_bdevs_discovered": 0, 00:10:17.161 "num_base_bdevs_operational": 2, 00:10:17.161 "base_bdevs_list": [ 00:10:17.161 { 00:10:17.161 "name": "BaseBdev1", 00:10:17.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.161 "is_configured": false, 00:10:17.161 "data_offset": 0, 00:10:17.161 "data_size": 0 00:10:17.161 }, 00:10:17.161 { 00:10:17.161 "name": "BaseBdev2", 00:10:17.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.161 "is_configured": false, 00:10:17.161 "data_offset": 0, 00:10:17.161 "data_size": 0 00:10:17.161 } 00:10:17.161 ] 00:10:17.161 }' 00:10:17.161 08:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.161 08:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.419 [2024-11-27 08:42:14.152759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.419 [2024-11-27 08:42:14.152806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:17.419 [2024-11-27 08:42:14.160771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.419 [2024-11-27 08:42:14.160827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.419 [2024-11-27 08:42:14.160843] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.419 [2024-11-27 08:42:14.160862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.419 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.678 [2024-11-27 08:42:14.206630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.679 BaseBdev1 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # 
rpc_cmd bdev_wait_for_examine 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.679 [ 00:10:17.679 { 00:10:17.679 "name": "BaseBdev1", 00:10:17.679 "aliases": [ 00:10:17.679 "0de0581d-1884-42b4-aceb-c9b438e9b2e8" 00:10:17.679 ], 00:10:17.679 "product_name": "Malloc disk", 00:10:17.679 "block_size": 512, 00:10:17.679 "num_blocks": 65536, 00:10:17.679 "uuid": "0de0581d-1884-42b4-aceb-c9b438e9b2e8", 00:10:17.679 "assigned_rate_limits": { 00:10:17.679 "rw_ios_per_sec": 0, 00:10:17.679 "rw_mbytes_per_sec": 0, 00:10:17.679 "r_mbytes_per_sec": 0, 00:10:17.679 "w_mbytes_per_sec": 0 00:10:17.679 }, 00:10:17.679 "claimed": true, 00:10:17.679 "claim_type": "exclusive_write", 00:10:17.679 "zoned": false, 00:10:17.679 "supported_io_types": { 00:10:17.679 "read": true, 00:10:17.679 "write": true, 00:10:17.679 "unmap": true, 00:10:17.679 "flush": true, 00:10:17.679 "reset": true, 00:10:17.679 "nvme_admin": false, 00:10:17.679 "nvme_io": false, 00:10:17.679 "nvme_io_md": false, 00:10:17.679 "write_zeroes": true, 00:10:17.679 "zcopy": true, 00:10:17.679 "get_zone_info": false, 00:10:17.679 "zone_management": false, 00:10:17.679 "zone_append": false, 00:10:17.679 "compare": false, 00:10:17.679 "compare_and_write": false, 00:10:17.679 "abort": true, 00:10:17.679 "seek_hole": false, 00:10:17.679 "seek_data": false, 00:10:17.679 "copy": true, 00:10:17.679 "nvme_iov_md": 
false 00:10:17.679 }, 00:10:17.679 "memory_domains": [ 00:10:17.679 { 00:10:17.679 "dma_device_id": "system", 00:10:17.679 "dma_device_type": 1 00:10:17.679 }, 00:10:17.679 { 00:10:17.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.679 "dma_device_type": 2 00:10:17.679 } 00:10:17.679 ], 00:10:17.679 "driver_specific": {} 00:10:17.679 } 00:10:17.679 ] 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.679 
08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.679 "name": "Existed_Raid", 00:10:17.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.679 "strip_size_kb": 64, 00:10:17.679 "state": "configuring", 00:10:17.679 "raid_level": "concat", 00:10:17.679 "superblock": false, 00:10:17.679 "num_base_bdevs": 2, 00:10:17.679 "num_base_bdevs_discovered": 1, 00:10:17.679 "num_base_bdevs_operational": 2, 00:10:17.679 "base_bdevs_list": [ 00:10:17.679 { 00:10:17.679 "name": "BaseBdev1", 00:10:17.679 "uuid": "0de0581d-1884-42b4-aceb-c9b438e9b2e8", 00:10:17.679 "is_configured": true, 00:10:17.679 "data_offset": 0, 00:10:17.679 "data_size": 65536 00:10:17.679 }, 00:10:17.679 { 00:10:17.679 "name": "BaseBdev2", 00:10:17.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.679 "is_configured": false, 00:10:17.679 "data_offset": 0, 00:10:17.679 "data_size": 0 00:10:17.679 } 00:10:17.679 ] 00:10:17.679 }' 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.679 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.246 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.247 [2024-11-27 08:42:14.746901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.247 [2024-11-27 08:42:14.747110] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.247 [2024-11-27 08:42:14.754944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.247 [2024-11-27 08:42:14.757918] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.247 [2024-11-27 08:42:14.757974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.247 "name": "Existed_Raid", 00:10:18.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.247 "strip_size_kb": 64, 00:10:18.247 "state": "configuring", 00:10:18.247 "raid_level": "concat", 00:10:18.247 "superblock": false, 00:10:18.247 "num_base_bdevs": 2, 00:10:18.247 "num_base_bdevs_discovered": 1, 00:10:18.247 "num_base_bdevs_operational": 2, 00:10:18.247 "base_bdevs_list": [ 00:10:18.247 { 00:10:18.247 "name": "BaseBdev1", 00:10:18.247 "uuid": "0de0581d-1884-42b4-aceb-c9b438e9b2e8", 00:10:18.247 "is_configured": true, 00:10:18.247 "data_offset": 0, 00:10:18.247 "data_size": 65536 00:10:18.247 }, 00:10:18.247 { 00:10:18.247 "name": "BaseBdev2", 00:10:18.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.247 "is_configured": false, 00:10:18.247 "data_offset": 0, 00:10:18.247 "data_size": 0 00:10:18.247 } 
00:10:18.247 ] 00:10:18.247 }' 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.247 08:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.506 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.506 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.506 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.765 [2024-11-27 08:42:15.291030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.765 [2024-11-27 08:42:15.291418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:18.765 [2024-11-27 08:42:15.291474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:18.765 [2024-11-27 08:42:15.291938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:18.765 [2024-11-27 08:42:15.292163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:18.765 [2024-11-27 08:42:15.292188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:18.765 [2024-11-27 08:42:15.292553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.765 BaseBdev2 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:10:18.765 08:42:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.765 [ 00:10:18.765 { 00:10:18.765 "name": "BaseBdev2", 00:10:18.765 "aliases": [ 00:10:18.765 "4f10c6ca-01b6-4475-b63d-98c2116b9933" 00:10:18.765 ], 00:10:18.765 "product_name": "Malloc disk", 00:10:18.765 "block_size": 512, 00:10:18.765 "num_blocks": 65536, 00:10:18.765 "uuid": "4f10c6ca-01b6-4475-b63d-98c2116b9933", 00:10:18.765 "assigned_rate_limits": { 00:10:18.765 "rw_ios_per_sec": 0, 00:10:18.765 "rw_mbytes_per_sec": 0, 00:10:18.765 "r_mbytes_per_sec": 0, 00:10:18.765 "w_mbytes_per_sec": 0 00:10:18.765 }, 00:10:18.765 "claimed": true, 00:10:18.765 "claim_type": "exclusive_write", 00:10:18.765 "zoned": false, 00:10:18.765 "supported_io_types": { 00:10:18.765 "read": true, 00:10:18.765 "write": true, 00:10:18.765 "unmap": true, 00:10:18.765 "flush": true, 00:10:18.765 "reset": true, 00:10:18.765 "nvme_admin": false, 00:10:18.765 "nvme_io": false, 00:10:18.765 "nvme_io_md": 
false, 00:10:18.765 "write_zeroes": true, 00:10:18.765 "zcopy": true, 00:10:18.765 "get_zone_info": false, 00:10:18.765 "zone_management": false, 00:10:18.765 "zone_append": false, 00:10:18.765 "compare": false, 00:10:18.765 "compare_and_write": false, 00:10:18.765 "abort": true, 00:10:18.765 "seek_hole": false, 00:10:18.765 "seek_data": false, 00:10:18.765 "copy": true, 00:10:18.765 "nvme_iov_md": false 00:10:18.765 }, 00:10:18.765 "memory_domains": [ 00:10:18.765 { 00:10:18.765 "dma_device_id": "system", 00:10:18.765 "dma_device_type": 1 00:10:18.765 }, 00:10:18.765 { 00:10:18.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.765 "dma_device_type": 2 00:10:18.765 } 00:10:18.765 ], 00:10:18.765 "driver_specific": {} 00:10:18.765 } 00:10:18.765 ] 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.765 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.766 "name": "Existed_Raid", 00:10:18.766 "uuid": "d402ac31-4d13-429e-bdd9-6174b5cf6594", 00:10:18.766 "strip_size_kb": 64, 00:10:18.766 "state": "online", 00:10:18.766 "raid_level": "concat", 00:10:18.766 "superblock": false, 00:10:18.766 "num_base_bdevs": 2, 00:10:18.766 "num_base_bdevs_discovered": 2, 00:10:18.766 "num_base_bdevs_operational": 2, 00:10:18.766 "base_bdevs_list": [ 00:10:18.766 { 00:10:18.766 "name": "BaseBdev1", 00:10:18.766 "uuid": "0de0581d-1884-42b4-aceb-c9b438e9b2e8", 00:10:18.766 "is_configured": true, 00:10:18.766 "data_offset": 0, 00:10:18.766 "data_size": 65536 00:10:18.766 }, 00:10:18.766 { 00:10:18.766 "name": "BaseBdev2", 00:10:18.766 "uuid": "4f10c6ca-01b6-4475-b63d-98c2116b9933", 00:10:18.766 "is_configured": true, 00:10:18.766 "data_offset": 0, 00:10:18.766 "data_size": 65536 00:10:18.766 } 00:10:18.766 ] 00:10:18.766 }' 00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:18.766 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.371 [2024-11-27 08:42:15.847665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.371 "name": "Existed_Raid", 00:10:19.371 "aliases": [ 00:10:19.371 "d402ac31-4d13-429e-bdd9-6174b5cf6594" 00:10:19.371 ], 00:10:19.371 "product_name": "Raid Volume", 00:10:19.371 "block_size": 512, 00:10:19.371 "num_blocks": 131072, 00:10:19.371 "uuid": "d402ac31-4d13-429e-bdd9-6174b5cf6594", 00:10:19.371 "assigned_rate_limits": { 00:10:19.371 "rw_ios_per_sec": 0, 00:10:19.371 "rw_mbytes_per_sec": 0, 00:10:19.371 "r_mbytes_per_sec": 
0, 00:10:19.371 "w_mbytes_per_sec": 0 00:10:19.371 }, 00:10:19.371 "claimed": false, 00:10:19.371 "zoned": false, 00:10:19.371 "supported_io_types": { 00:10:19.371 "read": true, 00:10:19.371 "write": true, 00:10:19.371 "unmap": true, 00:10:19.371 "flush": true, 00:10:19.371 "reset": true, 00:10:19.371 "nvme_admin": false, 00:10:19.371 "nvme_io": false, 00:10:19.371 "nvme_io_md": false, 00:10:19.371 "write_zeroes": true, 00:10:19.371 "zcopy": false, 00:10:19.371 "get_zone_info": false, 00:10:19.371 "zone_management": false, 00:10:19.371 "zone_append": false, 00:10:19.371 "compare": false, 00:10:19.371 "compare_and_write": false, 00:10:19.371 "abort": false, 00:10:19.371 "seek_hole": false, 00:10:19.371 "seek_data": false, 00:10:19.371 "copy": false, 00:10:19.371 "nvme_iov_md": false 00:10:19.371 }, 00:10:19.371 "memory_domains": [ 00:10:19.371 { 00:10:19.371 "dma_device_id": "system", 00:10:19.371 "dma_device_type": 1 00:10:19.371 }, 00:10:19.371 { 00:10:19.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.371 "dma_device_type": 2 00:10:19.371 }, 00:10:19.371 { 00:10:19.371 "dma_device_id": "system", 00:10:19.371 "dma_device_type": 1 00:10:19.371 }, 00:10:19.371 { 00:10:19.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.371 "dma_device_type": 2 00:10:19.371 } 00:10:19.371 ], 00:10:19.371 "driver_specific": { 00:10:19.371 "raid": { 00:10:19.371 "uuid": "d402ac31-4d13-429e-bdd9-6174b5cf6594", 00:10:19.371 "strip_size_kb": 64, 00:10:19.371 "state": "online", 00:10:19.371 "raid_level": "concat", 00:10:19.371 "superblock": false, 00:10:19.371 "num_base_bdevs": 2, 00:10:19.371 "num_base_bdevs_discovered": 2, 00:10:19.371 "num_base_bdevs_operational": 2, 00:10:19.371 "base_bdevs_list": [ 00:10:19.371 { 00:10:19.371 "name": "BaseBdev1", 00:10:19.371 "uuid": "0de0581d-1884-42b4-aceb-c9b438e9b2e8", 00:10:19.371 "is_configured": true, 00:10:19.371 "data_offset": 0, 00:10:19.371 "data_size": 65536 00:10:19.371 }, 00:10:19.371 { 00:10:19.371 "name": "BaseBdev2", 
00:10:19.371 "uuid": "4f10c6ca-01b6-4475-b63d-98c2116b9933", 00:10:19.371 "is_configured": true, 00:10:19.371 "data_offset": 0, 00:10:19.371 "data_size": 65536 00:10:19.371 } 00:10:19.371 ] 00:10:19.371 } 00:10:19.371 } 00:10:19.371 }' 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:19.371 BaseBdev2' 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.371 08:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.371 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.371 [2024-11-27 08:42:16.111439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.371 [2024-11-27 08:42:16.111491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.371 [2024-11-27 08:42:16.111561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.630 "name": "Existed_Raid", 00:10:19.630 "uuid": "d402ac31-4d13-429e-bdd9-6174b5cf6594", 00:10:19.630 "strip_size_kb": 64, 00:10:19.630 
"state": "offline", 00:10:19.630 "raid_level": "concat", 00:10:19.630 "superblock": false, 00:10:19.630 "num_base_bdevs": 2, 00:10:19.630 "num_base_bdevs_discovered": 1, 00:10:19.630 "num_base_bdevs_operational": 1, 00:10:19.630 "base_bdevs_list": [ 00:10:19.630 { 00:10:19.630 "name": null, 00:10:19.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.630 "is_configured": false, 00:10:19.630 "data_offset": 0, 00:10:19.630 "data_size": 65536 00:10:19.630 }, 00:10:19.630 { 00:10:19.630 "name": "BaseBdev2", 00:10:19.630 "uuid": "4f10c6ca-01b6-4475-b63d-98c2116b9933", 00:10:19.630 "is_configured": true, 00:10:19.630 "data_offset": 0, 00:10:19.630 "data_size": 65536 00:10:19.630 } 00:10:19.630 ] 00:10:19.630 }' 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.630 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.198 [2024-11-27 08:42:16.802519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:20.198 [2024-11-27 08:42:16.802820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61753 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 61753 ']' 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # kill -0 61753 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:20.198 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 61753 00:10:20.457 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:20.457 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:20.457 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 61753' 00:10:20.457 killing process with pid 61753 00:10:20.457 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # kill 61753 00:10:20.457 [2024-11-27 08:42:16.978713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.457 08:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 61753 00:10:20.457 [2024-11-27 08:42:16.993625] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.394 00:10:21.394 real 0m5.564s 00:10:21.394 user 0m8.364s 00:10:21.394 sys 0m0.807s 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.394 ************************************ 00:10:21.394 END TEST raid_state_function_test 00:10:21.394 ************************************ 00:10:21.394 08:42:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:21.394 08:42:18 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 
']' 00:10:21.394 08:42:18 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:21.394 08:42:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.394 ************************************ 00:10:21.394 START TEST raid_state_function_test_sb 00:10:21.394 ************************************ 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test concat 2 true 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62012 00:10:21.394 Process raid pid: 62012 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62012' 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62012 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 62012 ']' 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:21.394 08:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.653 [2024-11-27 08:42:18.250490] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:10:21.653 [2024-11-27 08:42:18.251004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.913 [2024-11-27 08:42:18.438475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.913 [2024-11-27 08:42:18.573888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.172 [2024-11-27 08:42:18.793571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.172 [2024-11-27 08:42:18.793609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.742 [2024-11-27 08:42:19.197917] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:10:22.742 [2024-11-27 08:42:19.197982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.742 [2024-11-27 08:42:19.198000] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.742 [2024-11-27 08:42:19.198017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.742 
08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.742 "name": "Existed_Raid", 00:10:22.742 "uuid": "3240536a-3d9f-4fb1-bc1c-ebd9d6fd8789", 00:10:22.742 "strip_size_kb": 64, 00:10:22.742 "state": "configuring", 00:10:22.742 "raid_level": "concat", 00:10:22.742 "superblock": true, 00:10:22.742 "num_base_bdevs": 2, 00:10:22.742 "num_base_bdevs_discovered": 0, 00:10:22.742 "num_base_bdevs_operational": 2, 00:10:22.742 "base_bdevs_list": [ 00:10:22.742 { 00:10:22.742 "name": "BaseBdev1", 00:10:22.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.742 "is_configured": false, 00:10:22.742 "data_offset": 0, 00:10:22.742 "data_size": 0 00:10:22.742 }, 00:10:22.742 { 00:10:22.742 "name": "BaseBdev2", 00:10:22.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.742 "is_configured": false, 00:10:22.742 "data_offset": 0, 00:10:22.742 "data_size": 0 00:10:22.742 } 00:10:22.742 ] 00:10:22.742 }' 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.742 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.002 [2024-11-27 08:42:19.730211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:10:23.002 [2024-11-27 08:42:19.730279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.002 [2024-11-27 08:42:19.742085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.002 [2024-11-27 08:42:19.742153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.002 [2024-11-27 08:42:19.742170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.002 [2024-11-27 08:42:19.742190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.002 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.333 [2024-11-27 08:42:19.792874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.333 BaseBdev1 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.333 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.334 [ 00:10:23.334 { 00:10:23.334 "name": "BaseBdev1", 00:10:23.334 "aliases": [ 00:10:23.334 "3e4b894c-f790-4f14-a35a-8c12406b6f7b" 00:10:23.334 ], 00:10:23.334 "product_name": "Malloc disk", 00:10:23.334 "block_size": 512, 00:10:23.334 "num_blocks": 65536, 00:10:23.334 "uuid": "3e4b894c-f790-4f14-a35a-8c12406b6f7b", 00:10:23.334 "assigned_rate_limits": { 00:10:23.334 "rw_ios_per_sec": 0, 00:10:23.334 "rw_mbytes_per_sec": 0, 00:10:23.334 "r_mbytes_per_sec": 0, 00:10:23.334 "w_mbytes_per_sec": 0 00:10:23.334 }, 00:10:23.334 "claimed": true, 
00:10:23.334 "claim_type": "exclusive_write", 00:10:23.334 "zoned": false, 00:10:23.334 "supported_io_types": { 00:10:23.334 "read": true, 00:10:23.334 "write": true, 00:10:23.334 "unmap": true, 00:10:23.334 "flush": true, 00:10:23.334 "reset": true, 00:10:23.334 "nvme_admin": false, 00:10:23.334 "nvme_io": false, 00:10:23.334 "nvme_io_md": false, 00:10:23.334 "write_zeroes": true, 00:10:23.334 "zcopy": true, 00:10:23.334 "get_zone_info": false, 00:10:23.334 "zone_management": false, 00:10:23.334 "zone_append": false, 00:10:23.334 "compare": false, 00:10:23.334 "compare_and_write": false, 00:10:23.334 "abort": true, 00:10:23.334 "seek_hole": false, 00:10:23.334 "seek_data": false, 00:10:23.334 "copy": true, 00:10:23.334 "nvme_iov_md": false 00:10:23.334 }, 00:10:23.334 "memory_domains": [ 00:10:23.334 { 00:10:23.334 "dma_device_id": "system", 00:10:23.334 "dma_device_type": 1 00:10:23.334 }, 00:10:23.334 { 00:10:23.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.334 "dma_device_type": 2 00:10:23.334 } 00:10:23.334 ], 00:10:23.334 "driver_specific": {} 00:10:23.334 } 00:10:23.334 ] 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.334 08:42:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.334 "name": "Existed_Raid", 00:10:23.334 "uuid": "94e06f4b-7349-40ab-9713-7ebe7a53c406", 00:10:23.334 "strip_size_kb": 64, 00:10:23.334 "state": "configuring", 00:10:23.334 "raid_level": "concat", 00:10:23.334 "superblock": true, 00:10:23.334 "num_base_bdevs": 2, 00:10:23.334 "num_base_bdevs_discovered": 1, 00:10:23.334 "num_base_bdevs_operational": 2, 00:10:23.334 "base_bdevs_list": [ 00:10:23.334 { 00:10:23.334 "name": "BaseBdev1", 00:10:23.334 "uuid": "3e4b894c-f790-4f14-a35a-8c12406b6f7b", 00:10:23.334 "is_configured": true, 00:10:23.334 "data_offset": 2048, 00:10:23.334 "data_size": 63488 00:10:23.334 }, 00:10:23.334 { 00:10:23.334 "name": "BaseBdev2", 00:10:23.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.334 
"is_configured": false, 00:10:23.334 "data_offset": 0, 00:10:23.334 "data_size": 0 00:10:23.334 } 00:10:23.334 ] 00:10:23.334 }' 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.334 08:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.908 [2024-11-27 08:42:20.361115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.908 [2024-11-27 08:42:20.361481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.908 [2024-11-27 08:42:20.373177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.908 [2024-11-27 08:42:20.376095] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.908 [2024-11-27 08:42:20.376316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.908 08:42:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.908 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.909 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.909 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.909 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.909 08:42:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.909 "name": "Existed_Raid", 00:10:23.909 "uuid": "2ad46966-4996-4228-b5a3-f8428fe06ccf", 00:10:23.909 "strip_size_kb": 64, 00:10:23.909 "state": "configuring", 00:10:23.909 "raid_level": "concat", 00:10:23.909 "superblock": true, 00:10:23.909 "num_base_bdevs": 2, 00:10:23.909 "num_base_bdevs_discovered": 1, 00:10:23.909 "num_base_bdevs_operational": 2, 00:10:23.909 "base_bdevs_list": [ 00:10:23.909 { 00:10:23.909 "name": "BaseBdev1", 00:10:23.909 "uuid": "3e4b894c-f790-4f14-a35a-8c12406b6f7b", 00:10:23.909 "is_configured": true, 00:10:23.909 "data_offset": 2048, 00:10:23.909 "data_size": 63488 00:10:23.909 }, 00:10:23.909 { 00:10:23.909 "name": "BaseBdev2", 00:10:23.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.909 "is_configured": false, 00:10:23.909 "data_offset": 0, 00:10:23.909 "data_size": 0 00:10:23.909 } 00:10:23.909 ] 00:10:23.909 }' 00:10:23.909 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.909 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.176 [2024-11-27 08:42:20.928308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.176 [2024-11-27 08:42:20.928684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.176 [2024-11-27 08:42:20.928705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:24.176 BaseBdev2 00:10:24.176 [2024-11-27 08:42:20.929051] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:24.176 [2024-11-27 08:42:20.929257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.176 [2024-11-27 08:42:20.929279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:24.176 [2024-11-27 08:42:20.929518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.176 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.436 
08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.436 [ 00:10:24.436 { 00:10:24.436 "name": "BaseBdev2", 00:10:24.436 "aliases": [ 00:10:24.436 "f3428af7-6efd-4bff-af24-5dc0ff7e5336" 00:10:24.436 ], 00:10:24.436 "product_name": "Malloc disk", 00:10:24.436 "block_size": 512, 00:10:24.436 "num_blocks": 65536, 00:10:24.436 "uuid": "f3428af7-6efd-4bff-af24-5dc0ff7e5336", 00:10:24.436 "assigned_rate_limits": { 00:10:24.436 "rw_ios_per_sec": 0, 00:10:24.436 "rw_mbytes_per_sec": 0, 00:10:24.436 "r_mbytes_per_sec": 0, 00:10:24.436 "w_mbytes_per_sec": 0 00:10:24.436 }, 00:10:24.436 "claimed": true, 00:10:24.436 "claim_type": "exclusive_write", 00:10:24.436 "zoned": false, 00:10:24.436 "supported_io_types": { 00:10:24.436 "read": true, 00:10:24.436 "write": true, 00:10:24.436 "unmap": true, 00:10:24.436 "flush": true, 00:10:24.436 "reset": true, 00:10:24.436 "nvme_admin": false, 00:10:24.436 "nvme_io": false, 00:10:24.436 "nvme_io_md": false, 00:10:24.436 "write_zeroes": true, 00:10:24.436 "zcopy": true, 00:10:24.436 "get_zone_info": false, 00:10:24.436 "zone_management": false, 00:10:24.436 "zone_append": false, 00:10:24.436 "compare": false, 00:10:24.436 "compare_and_write": false, 00:10:24.436 "abort": true, 00:10:24.436 "seek_hole": false, 00:10:24.436 "seek_data": false, 00:10:24.436 "copy": true, 00:10:24.436 "nvme_iov_md": false 00:10:24.436 }, 00:10:24.436 "memory_domains": [ 00:10:24.436 { 00:10:24.436 "dma_device_id": "system", 00:10:24.436 "dma_device_type": 1 00:10:24.436 }, 00:10:24.436 { 00:10:24.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.436 "dma_device_type": 2 00:10:24.436 } 00:10:24.436 ], 00:10:24.436 "driver_specific": {} 00:10:24.436 } 00:10:24.436 ] 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:10:24.436 08:42:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.436 08:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.436 08:42:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.436 "name": "Existed_Raid", 00:10:24.436 "uuid": "2ad46966-4996-4228-b5a3-f8428fe06ccf", 00:10:24.436 "strip_size_kb": 64, 00:10:24.436 "state": "online", 00:10:24.436 "raid_level": "concat", 00:10:24.436 "superblock": true, 00:10:24.436 "num_base_bdevs": 2, 00:10:24.436 "num_base_bdevs_discovered": 2, 00:10:24.436 "num_base_bdevs_operational": 2, 00:10:24.436 "base_bdevs_list": [ 00:10:24.436 { 00:10:24.436 "name": "BaseBdev1", 00:10:24.436 "uuid": "3e4b894c-f790-4f14-a35a-8c12406b6f7b", 00:10:24.436 "is_configured": true, 00:10:24.436 "data_offset": 2048, 00:10:24.436 "data_size": 63488 00:10:24.436 }, 00:10:24.436 { 00:10:24.436 "name": "BaseBdev2", 00:10:24.436 "uuid": "f3428af7-6efd-4bff-af24-5dc0ff7e5336", 00:10:24.436 "is_configured": true, 00:10:24.436 "data_offset": 2048, 00:10:24.436 "data_size": 63488 00:10:24.436 } 00:10:24.436 ] 00:10:24.436 }' 00:10:24.436 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.436 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.003 [2024-11-27 08:42:21.472948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.003 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.004 "name": "Existed_Raid", 00:10:25.004 "aliases": [ 00:10:25.004 "2ad46966-4996-4228-b5a3-f8428fe06ccf" 00:10:25.004 ], 00:10:25.004 "product_name": "Raid Volume", 00:10:25.004 "block_size": 512, 00:10:25.004 "num_blocks": 126976, 00:10:25.004 "uuid": "2ad46966-4996-4228-b5a3-f8428fe06ccf", 00:10:25.004 "assigned_rate_limits": { 00:10:25.004 "rw_ios_per_sec": 0, 00:10:25.004 "rw_mbytes_per_sec": 0, 00:10:25.004 "r_mbytes_per_sec": 0, 00:10:25.004 "w_mbytes_per_sec": 0 00:10:25.004 }, 00:10:25.004 "claimed": false, 00:10:25.004 "zoned": false, 00:10:25.004 "supported_io_types": { 00:10:25.004 "read": true, 00:10:25.004 "write": true, 00:10:25.004 "unmap": true, 00:10:25.004 "flush": true, 00:10:25.004 "reset": true, 00:10:25.004 "nvme_admin": false, 00:10:25.004 "nvme_io": false, 00:10:25.004 "nvme_io_md": false, 00:10:25.004 "write_zeroes": true, 00:10:25.004 "zcopy": false, 00:10:25.004 "get_zone_info": false, 00:10:25.004 "zone_management": false, 00:10:25.004 "zone_append": false, 00:10:25.004 "compare": false, 00:10:25.004 "compare_and_write": false, 00:10:25.004 "abort": false, 00:10:25.004 "seek_hole": false, 00:10:25.004 "seek_data": false, 00:10:25.004 "copy": false, 00:10:25.004 "nvme_iov_md": false 00:10:25.004 }, 00:10:25.004 "memory_domains": [ 00:10:25.004 { 00:10:25.004 
"dma_device_id": "system", 00:10:25.004 "dma_device_type": 1 00:10:25.004 }, 00:10:25.004 { 00:10:25.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.004 "dma_device_type": 2 00:10:25.004 }, 00:10:25.004 { 00:10:25.004 "dma_device_id": "system", 00:10:25.004 "dma_device_type": 1 00:10:25.004 }, 00:10:25.004 { 00:10:25.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.004 "dma_device_type": 2 00:10:25.004 } 00:10:25.004 ], 00:10:25.004 "driver_specific": { 00:10:25.004 "raid": { 00:10:25.004 "uuid": "2ad46966-4996-4228-b5a3-f8428fe06ccf", 00:10:25.004 "strip_size_kb": 64, 00:10:25.004 "state": "online", 00:10:25.004 "raid_level": "concat", 00:10:25.004 "superblock": true, 00:10:25.004 "num_base_bdevs": 2, 00:10:25.004 "num_base_bdevs_discovered": 2, 00:10:25.004 "num_base_bdevs_operational": 2, 00:10:25.004 "base_bdevs_list": [ 00:10:25.004 { 00:10:25.004 "name": "BaseBdev1", 00:10:25.004 "uuid": "3e4b894c-f790-4f14-a35a-8c12406b6f7b", 00:10:25.004 "is_configured": true, 00:10:25.004 "data_offset": 2048, 00:10:25.004 "data_size": 63488 00:10:25.004 }, 00:10:25.004 { 00:10:25.004 "name": "BaseBdev2", 00:10:25.004 "uuid": "f3428af7-6efd-4bff-af24-5dc0ff7e5336", 00:10:25.004 "is_configured": true, 00:10:25.004 "data_offset": 2048, 00:10:25.004 "data_size": 63488 00:10:25.004 } 00:10:25.004 ] 00:10:25.004 } 00:10:25.004 } 00:10:25.004 }' 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:25.004 BaseBdev2' 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.004 08:42:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.004 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.004 [2024-11-27 08:42:21.740665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.004 [2024-11-27 08:42:21.740714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.004 [2024-11-27 08:42:21.740790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.263 "name": "Existed_Raid", 00:10:25.263 "uuid": "2ad46966-4996-4228-b5a3-f8428fe06ccf", 00:10:25.263 "strip_size_kb": 64, 00:10:25.263 "state": "offline", 00:10:25.263 "raid_level": "concat", 00:10:25.263 "superblock": true, 00:10:25.263 "num_base_bdevs": 2, 00:10:25.263 "num_base_bdevs_discovered": 1, 00:10:25.263 "num_base_bdevs_operational": 1, 00:10:25.263 "base_bdevs_list": [ 00:10:25.263 { 00:10:25.263 "name": null, 00:10:25.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.263 "is_configured": false, 00:10:25.263 "data_offset": 0, 00:10:25.263 "data_size": 63488 00:10:25.263 }, 00:10:25.263 { 00:10:25.263 "name": "BaseBdev2", 00:10:25.263 "uuid": "f3428af7-6efd-4bff-af24-5dc0ff7e5336", 00:10:25.263 "is_configured": true, 00:10:25.263 "data_offset": 2048, 00:10:25.263 "data_size": 63488 00:10:25.263 } 00:10:25.263 ] 
00:10:25.263 }' 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.263 08:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.833 [2024-11-27 08:42:22.419483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.833 [2024-11-27 08:42:22.419798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.833 08:42:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62012 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 62012 ']' 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 62012 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:25.833 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 62012 00:10:26.092 killing process with pid 62012 00:10:26.092 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:26.092 08:42:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:26.092 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 62012' 00:10:26.092 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 62012 00:10:26.092 [2024-11-27 08:42:22.603927] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.092 08:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 62012 00:10:26.092 [2024-11-27 08:42:22.619809] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.031 08:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:27.031 00:10:27.031 real 0m5.638s 00:10:27.031 user 0m8.417s 00:10:27.031 sys 0m0.818s 00:10:27.031 08:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:27.031 ************************************ 00:10:27.031 END TEST raid_state_function_test_sb 00:10:27.031 ************************************ 00:10:27.031 08:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.297 08:42:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:27.297 08:42:23 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:27.297 08:42:23 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:27.297 08:42:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.297 ************************************ 00:10:27.297 START TEST raid_superblock_test 00:10:27.297 ************************************ 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test concat 2 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:27.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62268 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62268 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 62268 ']' 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:27.297 08:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.297 [2024-11-27 08:42:23.945503] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:10:27.297 [2024-11-27 08:42:23.945689] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62268 ] 00:10:27.567 [2024-11-27 08:42:24.138791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.567 [2024-11-27 08:42:24.301921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.826 [2024-11-27 08:42:24.529379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.826 [2024-11-27 08:42:24.529480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:28.395 
08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.395 malloc1 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.395 [2024-11-27 08:42:24.986454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:28.395 [2024-11-27 08:42:24.986847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.395 [2024-11-27 08:42:24.986933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:28.395 [2024-11-27 08:42:24.987225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.395 [2024-11-27 08:42:24.990403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.395 [2024-11-27 08:42:24.990577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:28.395 pt1 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.395 08:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.395 malloc2 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.395 [2024-11-27 08:42:25.047056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.395 [2024-11-27 08:42:25.047169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.395 [2024-11-27 08:42:25.047225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:28.395 [2024-11-27 08:42:25.047244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.395 [2024-11-27 08:42:25.050321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.395 [2024-11-27 08:42:25.050388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.395 
pt2 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.395 [2024-11-27 08:42:25.055280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:28.395 [2024-11-27 08:42:25.057888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.395 [2024-11-27 08:42:25.058126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:28.395 [2024-11-27 08:42:25.058147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:28.395 [2024-11-27 08:42:25.058508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:28.395 [2024-11-27 08:42:25.058728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:28.395 [2024-11-27 08:42:25.058770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:28.395 [2024-11-27 08:42:25.058964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.395 "name": "raid_bdev1", 00:10:28.395 "uuid": "de8f1201-7cb7-46fb-8a6b-627a6ded40a9", 00:10:28.395 "strip_size_kb": 64, 00:10:28.395 "state": "online", 00:10:28.395 "raid_level": "concat", 00:10:28.395 "superblock": true, 00:10:28.395 "num_base_bdevs": 2, 00:10:28.395 "num_base_bdevs_discovered": 2, 00:10:28.395 "num_base_bdevs_operational": 2, 00:10:28.395 "base_bdevs_list": [ 00:10:28.395 { 00:10:28.395 "name": "pt1", 
00:10:28.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.395 "is_configured": true, 00:10:28.395 "data_offset": 2048, 00:10:28.395 "data_size": 63488 00:10:28.395 }, 00:10:28.395 { 00:10:28.395 "name": "pt2", 00:10:28.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.395 "is_configured": true, 00:10:28.395 "data_offset": 2048, 00:10:28.395 "data_size": 63488 00:10:28.395 } 00:10:28.395 ] 00:10:28.395 }' 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.395 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.963 [2024-11-27 08:42:25.607781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.963 "name": "raid_bdev1", 00:10:28.963 "aliases": [ 00:10:28.963 "de8f1201-7cb7-46fb-8a6b-627a6ded40a9" 00:10:28.963 ], 00:10:28.963 "product_name": "Raid Volume", 00:10:28.963 "block_size": 512, 00:10:28.963 "num_blocks": 126976, 00:10:28.963 "uuid": "de8f1201-7cb7-46fb-8a6b-627a6ded40a9", 00:10:28.963 "assigned_rate_limits": { 00:10:28.963 "rw_ios_per_sec": 0, 00:10:28.963 "rw_mbytes_per_sec": 0, 00:10:28.963 "r_mbytes_per_sec": 0, 00:10:28.963 "w_mbytes_per_sec": 0 00:10:28.963 }, 00:10:28.963 "claimed": false, 00:10:28.963 "zoned": false, 00:10:28.963 "supported_io_types": { 00:10:28.963 "read": true, 00:10:28.963 "write": true, 00:10:28.963 "unmap": true, 00:10:28.963 "flush": true, 00:10:28.963 "reset": true, 00:10:28.963 "nvme_admin": false, 00:10:28.963 "nvme_io": false, 00:10:28.963 "nvme_io_md": false, 00:10:28.963 "write_zeroes": true, 00:10:28.963 "zcopy": false, 00:10:28.963 "get_zone_info": false, 00:10:28.963 "zone_management": false, 00:10:28.963 "zone_append": false, 00:10:28.963 "compare": false, 00:10:28.963 "compare_and_write": false, 00:10:28.963 "abort": false, 00:10:28.963 "seek_hole": false, 00:10:28.963 "seek_data": false, 00:10:28.963 "copy": false, 00:10:28.963 "nvme_iov_md": false 00:10:28.963 }, 00:10:28.963 "memory_domains": [ 00:10:28.963 { 00:10:28.963 "dma_device_id": "system", 00:10:28.963 "dma_device_type": 1 00:10:28.963 }, 00:10:28.963 { 00:10:28.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.963 "dma_device_type": 2 00:10:28.963 }, 00:10:28.963 { 00:10:28.963 "dma_device_id": "system", 00:10:28.963 "dma_device_type": 1 00:10:28.963 }, 00:10:28.963 { 00:10:28.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.963 "dma_device_type": 2 00:10:28.963 } 00:10:28.963 ], 00:10:28.963 "driver_specific": { 00:10:28.963 "raid": { 00:10:28.963 "uuid": "de8f1201-7cb7-46fb-8a6b-627a6ded40a9", 00:10:28.963 "strip_size_kb": 64, 00:10:28.963 "state": "online", 00:10:28.963 
"raid_level": "concat", 00:10:28.963 "superblock": true, 00:10:28.963 "num_base_bdevs": 2, 00:10:28.963 "num_base_bdevs_discovered": 2, 00:10:28.963 "num_base_bdevs_operational": 2, 00:10:28.963 "base_bdevs_list": [ 00:10:28.963 { 00:10:28.963 "name": "pt1", 00:10:28.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.963 "is_configured": true, 00:10:28.963 "data_offset": 2048, 00:10:28.963 "data_size": 63488 00:10:28.963 }, 00:10:28.963 { 00:10:28.963 "name": "pt2", 00:10:28.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.963 "is_configured": true, 00:10:28.963 "data_offset": 2048, 00:10:28.963 "data_size": 63488 00:10:28.963 } 00:10:28.963 ] 00:10:28.963 } 00:10:28.963 } 00:10:28.963 }' 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:28.963 pt2' 00:10:28.963 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.222 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.222 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.222 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:29.222 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.222 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.223 08:42:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.223 [2024-11-27 08:42:25.879823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=de8f1201-7cb7-46fb-8a6b-627a6ded40a9 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
de8f1201-7cb7-46fb-8a6b-627a6ded40a9 ']' 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.223 [2024-11-27 08:42:25.935424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.223 [2024-11-27 08:42:25.935459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.223 [2024-11-27 08:42:25.935580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.223 [2024-11-27 08:42:25.935654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.223 [2024-11-27 08:42:25.935687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:29.223 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.481 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:29.482 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:29.482 08:42:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.482 08:42:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:29.482 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 08:42:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 [2024-11-27 08:42:26.075524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:29.482 [2024-11-27 08:42:26.078366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:29.482 [2024-11-27 08:42:26.078730] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:29.482 [2024-11-27 08:42:26.078830] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:29.482 [2024-11-27 08:42:26.078859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.482 [2024-11-27 08:42:26.078876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:29.482 request: 00:10:29.482 { 00:10:29.482 "name": "raid_bdev1", 00:10:29.482 "raid_level": "concat", 00:10:29.482 "base_bdevs": [ 00:10:29.482 "malloc1", 00:10:29.482 "malloc2" 00:10:29.482 ], 00:10:29.482 "strip_size_kb": 64, 
00:10:29.482 "superblock": false, 00:10:29.482 "method": "bdev_raid_create", 00:10:29.482 "req_id": 1 00:10:29.482 } 00:10:29.482 Got JSON-RPC error response 00:10:29.482 response: 00:10:29.482 { 00:10:29.482 "code": -17, 00:10:29.482 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:29.482 } 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.482 [2024-11-27 08:42:26.139618] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:10:29.482 [2024-11-27 08:42:26.139885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.482 [2024-11-27 08:42:26.139964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:29.482 [2024-11-27 08:42:26.140088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.482 [2024-11-27 08:42:26.143323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.482 [2024-11-27 08:42:26.143516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:29.482 [2024-11-27 08:42:26.143794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:29.482 [2024-11-27 08:42:26.143987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:29.482 pt1 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.482 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.483 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.483 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.483 "name": "raid_bdev1", 00:10:29.483 "uuid": "de8f1201-7cb7-46fb-8a6b-627a6ded40a9", 00:10:29.483 "strip_size_kb": 64, 00:10:29.483 "state": "configuring", 00:10:29.483 "raid_level": "concat", 00:10:29.483 "superblock": true, 00:10:29.483 "num_base_bdevs": 2, 00:10:29.483 "num_base_bdevs_discovered": 1, 00:10:29.483 "num_base_bdevs_operational": 2, 00:10:29.483 "base_bdevs_list": [ 00:10:29.483 { 00:10:29.483 "name": "pt1", 00:10:29.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.483 "is_configured": true, 00:10:29.483 "data_offset": 2048, 00:10:29.483 "data_size": 63488 00:10:29.483 }, 00:10:29.483 { 00:10:29.483 "name": null, 00:10:29.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.483 "is_configured": false, 00:10:29.483 "data_offset": 2048, 00:10:29.483 "data_size": 63488 00:10:29.483 } 00:10:29.483 ] 00:10:29.483 }' 00:10:29.483 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.483 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 [2024-11-27 08:42:26.712141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:30.051 [2024-11-27 08:42:26.712296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.051 [2024-11-27 08:42:26.712351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:30.051 [2024-11-27 08:42:26.712376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.051 [2024-11-27 08:42:26.713079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.051 [2024-11-27 08:42:26.713119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:30.051 [2024-11-27 08:42:26.713236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:30.051 [2024-11-27 08:42:26.713285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:30.051 [2024-11-27 08:42:26.713470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:30.051 [2024-11-27 08:42:26.713503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:30.051 [2024-11-27 08:42:26.713842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:30.051 [2024-11-27 08:42:26.714060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:30.051 [2024-11-27 08:42:26.714078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:30.051 [2024-11-27 08:42:26.714297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.051 pt2 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.051 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.051 "name": "raid_bdev1", 00:10:30.051 "uuid": "de8f1201-7cb7-46fb-8a6b-627a6ded40a9", 00:10:30.051 "strip_size_kb": 64, 00:10:30.051 "state": "online", 00:10:30.051 "raid_level": "concat", 00:10:30.051 "superblock": true, 00:10:30.051 "num_base_bdevs": 2, 00:10:30.051 "num_base_bdevs_discovered": 2, 00:10:30.051 "num_base_bdevs_operational": 2, 00:10:30.051 "base_bdevs_list": [ 00:10:30.051 { 00:10:30.051 "name": "pt1", 00:10:30.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.051 "is_configured": true, 00:10:30.051 "data_offset": 2048, 00:10:30.051 "data_size": 63488 00:10:30.051 }, 00:10:30.051 { 00:10:30.052 "name": "pt2", 00:10:30.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.052 "is_configured": true, 00:10:30.052 "data_offset": 2048, 00:10:30.052 "data_size": 63488 00:10:30.052 } 00:10:30.052 ] 00:10:30.052 }' 00:10:30.052 08:42:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.052 08:42:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.619 08:42:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.619 [2024-11-27 08:42:27.260606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.619 "name": "raid_bdev1", 00:10:30.619 "aliases": [ 00:10:30.619 "de8f1201-7cb7-46fb-8a6b-627a6ded40a9" 00:10:30.619 ], 00:10:30.619 "product_name": "Raid Volume", 00:10:30.619 "block_size": 512, 00:10:30.619 "num_blocks": 126976, 00:10:30.619 "uuid": "de8f1201-7cb7-46fb-8a6b-627a6ded40a9", 00:10:30.619 "assigned_rate_limits": { 00:10:30.619 "rw_ios_per_sec": 0, 00:10:30.619 "rw_mbytes_per_sec": 0, 00:10:30.619 "r_mbytes_per_sec": 0, 00:10:30.619 "w_mbytes_per_sec": 0 00:10:30.619 }, 00:10:30.619 "claimed": false, 00:10:30.619 "zoned": false, 00:10:30.619 "supported_io_types": { 00:10:30.619 "read": true, 00:10:30.619 "write": true, 00:10:30.619 "unmap": true, 00:10:30.619 "flush": true, 00:10:30.619 "reset": true, 00:10:30.619 "nvme_admin": false, 00:10:30.619 "nvme_io": false, 00:10:30.619 "nvme_io_md": false, 00:10:30.619 "write_zeroes": true, 00:10:30.619 "zcopy": false, 00:10:30.619 "get_zone_info": false, 00:10:30.619 "zone_management": false, 00:10:30.619 "zone_append": false, 00:10:30.619 "compare": false, 00:10:30.619 "compare_and_write": false, 00:10:30.619 "abort": false, 00:10:30.619 "seek_hole": false, 00:10:30.619 
"seek_data": false, 00:10:30.619 "copy": false, 00:10:30.619 "nvme_iov_md": false 00:10:30.619 }, 00:10:30.619 "memory_domains": [ 00:10:30.619 { 00:10:30.619 "dma_device_id": "system", 00:10:30.619 "dma_device_type": 1 00:10:30.619 }, 00:10:30.619 { 00:10:30.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.619 "dma_device_type": 2 00:10:30.619 }, 00:10:30.619 { 00:10:30.619 "dma_device_id": "system", 00:10:30.619 "dma_device_type": 1 00:10:30.619 }, 00:10:30.619 { 00:10:30.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.619 "dma_device_type": 2 00:10:30.619 } 00:10:30.619 ], 00:10:30.619 "driver_specific": { 00:10:30.619 "raid": { 00:10:30.619 "uuid": "de8f1201-7cb7-46fb-8a6b-627a6ded40a9", 00:10:30.619 "strip_size_kb": 64, 00:10:30.619 "state": "online", 00:10:30.619 "raid_level": "concat", 00:10:30.619 "superblock": true, 00:10:30.619 "num_base_bdevs": 2, 00:10:30.619 "num_base_bdevs_discovered": 2, 00:10:30.619 "num_base_bdevs_operational": 2, 00:10:30.619 "base_bdevs_list": [ 00:10:30.619 { 00:10:30.619 "name": "pt1", 00:10:30.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.619 "is_configured": true, 00:10:30.619 "data_offset": 2048, 00:10:30.619 "data_size": 63488 00:10:30.619 }, 00:10:30.619 { 00:10:30.619 "name": "pt2", 00:10:30.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.619 "is_configured": true, 00:10:30.619 "data_offset": 2048, 00:10:30.619 "data_size": 63488 00:10:30.619 } 00:10:30.619 ] 00:10:30.619 } 00:10:30.619 } 00:10:30.619 }' 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:30.619 pt2' 00:10:30.619 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.878 08:42:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.878 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.879 [2024-11-27 08:42:27.528554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' de8f1201-7cb7-46fb-8a6b-627a6ded40a9 '!=' de8f1201-7cb7-46fb-8a6b-627a6ded40a9 ']' 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62268 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 62268 ']' 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 62268 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # uname 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 62268 00:10:30.879 killing process with pid 62268 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # echo 'killing process with pid 62268' 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # kill 62268 00:10:30.879 [2024-11-27 08:42:27.617877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.879 08:42:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@975 -- # wait 62268 00:10:30.879 [2024-11-27 08:42:27.618015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.879 [2024-11-27 08:42:27.618105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.879 [2024-11-27 08:42:27.618140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:31.137 [2024-11-27 08:42:27.815618] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.514 08:42:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:32.514 00:10:32.514 real 0m5.110s 00:10:32.514 user 0m7.449s 00:10:32.514 sys 0m0.822s 00:10:32.514 08:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:32.514 08:42:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.514 ************************************ 00:10:32.514 END TEST raid_superblock_test 00:10:32.514 ************************************ 00:10:32.514 08:42:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:32.514 08:42:28 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:10:32.514 08:42:28 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:32.514 08:42:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.514 ************************************ 00:10:32.514 START TEST raid_read_error_test 00:10:32.514 ************************************ 00:10:32.514 08:42:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test concat 2 read 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.514 08:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:32.514 08:42:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1ZOaf7fXVz 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62483 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62483 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 62483 ']' 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:32.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:32.514 08:42:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.514 [2024-11-27 08:42:29.131791] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:10:32.514 [2024-11-27 08:42:29.131991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62483 ] 00:10:32.772 [2024-11-27 08:42:29.323649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.772 [2024-11-27 08:42:29.471144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.031 [2024-11-27 08:42:29.695464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.031 [2024-11-27 08:42:29.695588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.599 BaseBdev1_malloc 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.599 true 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.599 [2024-11-27 08:42:30.216273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:33.599 [2024-11-27 08:42:30.216385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.599 [2024-11-27 08:42:30.216424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:33.599 [2024-11-27 08:42:30.216445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.599 [2024-11-27 08:42:30.219642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.599 [2024-11-27 08:42:30.219697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:33.599 BaseBdev1 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.599 BaseBdev2_malloc 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.599 true 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.599 [2024-11-27 08:42:30.276834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:33.599 [2024-11-27 08:42:30.276933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.599 [2024-11-27 08:42:30.276963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:33.599 [2024-11-27 08:42:30.276982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.599 [2024-11-27 08:42:30.280143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.599 [2024-11-27 08:42:30.280199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:33.599 BaseBdev2 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.599 [2024-11-27 08:42:30.285004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:10:33.599 [2024-11-27 08:42:30.287824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.599 [2024-11-27 08:42:30.288158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:33.599 [2024-11-27 08:42:30.288195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:33.599 [2024-11-27 08:42:30.288595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:33.599 [2024-11-27 08:42:30.288874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:33.599 [2024-11-27 08:42:30.288905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:33.599 [2024-11-27 08:42:30.289212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.599 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.600 "name": "raid_bdev1", 00:10:33.600 "uuid": "c3c10f1f-37eb-4965-bacc-ee90d69716c5", 00:10:33.600 "strip_size_kb": 64, 00:10:33.600 "state": "online", 00:10:33.600 "raid_level": "concat", 00:10:33.600 "superblock": true, 00:10:33.600 "num_base_bdevs": 2, 00:10:33.600 "num_base_bdevs_discovered": 2, 00:10:33.600 "num_base_bdevs_operational": 2, 00:10:33.600 "base_bdevs_list": [ 00:10:33.600 { 00:10:33.600 "name": "BaseBdev1", 00:10:33.600 "uuid": "477bbc28-6d91-5525-9384-8d75309e4edb", 00:10:33.600 "is_configured": true, 00:10:33.600 "data_offset": 2048, 00:10:33.600 "data_size": 63488 00:10:33.600 }, 00:10:33.600 { 00:10:33.600 "name": "BaseBdev2", 00:10:33.600 "uuid": "9a1c0bab-b2cc-5bab-bbb1-3674fd83b71b", 00:10:33.600 "is_configured": true, 00:10:33.600 "data_offset": 2048, 00:10:33.600 "data_size": 63488 00:10:33.600 } 00:10:33.600 ] 00:10:33.600 }' 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.600 08:42:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.167 08:42:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:34.167 08:42:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:34.167 [2024-11-27 08:42:30.910874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.105 08:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.364 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.364 "name": "raid_bdev1", 00:10:35.364 "uuid": "c3c10f1f-37eb-4965-bacc-ee90d69716c5", 00:10:35.364 "strip_size_kb": 64, 00:10:35.364 "state": "online", 00:10:35.364 "raid_level": "concat", 00:10:35.364 "superblock": true, 00:10:35.364 "num_base_bdevs": 2, 00:10:35.364 "num_base_bdevs_discovered": 2, 00:10:35.364 "num_base_bdevs_operational": 2, 00:10:35.364 "base_bdevs_list": [ 00:10:35.364 { 00:10:35.364 "name": "BaseBdev1", 00:10:35.364 "uuid": "477bbc28-6d91-5525-9384-8d75309e4edb", 00:10:35.364 "is_configured": true, 00:10:35.364 "data_offset": 2048, 00:10:35.364 "data_size": 63488 00:10:35.364 }, 00:10:35.364 { 00:10:35.364 "name": "BaseBdev2", 00:10:35.364 "uuid": "9a1c0bab-b2cc-5bab-bbb1-3674fd83b71b", 00:10:35.364 "is_configured": true, 00:10:35.364 "data_offset": 2048, 00:10:35.364 "data_size": 63488 00:10:35.364 } 00:10:35.364 ] 00:10:35.364 }' 00:10:35.364 08:42:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.364 08:42:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.623 08:42:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:35.623 08:42:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.623 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.623 [2024-11-27 08:42:32.364722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.623 [2024-11-27 08:42:32.364776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.623 [2024-11-27 08:42:32.368176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.623 [2024-11-27 08:42:32.368245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.623 [2024-11-27 08:42:32.368297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.623 [2024-11-27 08:42:32.368321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:35.623 { 00:10:35.623 "results": [ 00:10:35.623 { 00:10:35.623 "job": "raid_bdev1", 00:10:35.623 "core_mask": "0x1", 00:10:35.623 "workload": "randrw", 00:10:35.623 "percentage": 50, 00:10:35.623 "status": "finished", 00:10:35.623 "queue_depth": 1, 00:10:35.623 "io_size": 131072, 00:10:35.623 "runtime": 1.451165, 00:10:35.623 "iops": 9803.847253758187, 00:10:35.623 "mibps": 1225.4809067197734, 00:10:35.623 "io_failed": 1, 00:10:35.623 "io_timeout": 0, 00:10:35.623 "avg_latency_us": 143.67319549160428, 00:10:35.623 "min_latency_us": 41.192727272727275, 00:10:35.623 "max_latency_us": 2055.447272727273 00:10:35.623 } 00:10:35.623 ], 00:10:35.623 "core_count": 1 00:10:35.623 } 00:10:35.623 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.623 08:42:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62483 00:10:35.623 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 62483 ']' 00:10:35.623 08:42:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 62483 00:10:35.623 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # uname 00:10:35.623 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:35.623 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 62483 00:10:35.883 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:35.883 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:35.883 killing process with pid 62483 00:10:35.883 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 62483' 00:10:35.883 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 62483 00:10:35.883 08:42:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 62483 00:10:35.883 [2024-11-27 08:42:32.407426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.883 [2024-11-27 08:42:32.541787] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1ZOaf7fXVz 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:10:37.261 00:10:37.261 real 0m4.756s 00:10:37.261 user 0m5.879s 00:10:37.261 sys 0m0.656s 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:37.261 08:42:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.261 ************************************ 00:10:37.261 END TEST raid_read_error_test 00:10:37.261 ************************************ 00:10:37.261 08:42:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:37.261 08:42:33 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:10:37.261 08:42:33 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:37.261 08:42:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.261 ************************************ 00:10:37.261 START TEST raid_write_error_test 00:10:37.261 ************************************ 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test concat 2 write 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.261 08:42:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.97n5QCyrBz 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62634 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62634 00:10:37.261 08:42:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 62634 ']' 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:37.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:37.261 08:42:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.261 [2024-11-27 08:42:33.931277] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:10:37.261 [2024-11-27 08:42:33.931517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62634 ] 00:10:37.520 [2024-11-27 08:42:34.117234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.520 [2024-11-27 08:42:34.268993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.777 [2024-11-27 08:42:34.490873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.777 [2024-11-27 08:42:34.491026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.343 BaseBdev1_malloc 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.343 true 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.343 08:42:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.343 [2024-11-27 08:42:35.003081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:38.343 [2024-11-27 08:42:35.003214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.343 [2024-11-27 08:42:35.003250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:38.343 [2024-11-27 08:42:35.003270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.343 [2024-11-27 08:42:35.006472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.343 [2024-11-27 08:42:35.006527] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:38.343 BaseBdev1 00:10:38.343 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.343 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.343 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:38.343 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.343 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.343 BaseBdev2_malloc 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.344 true 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.344 [2024-11-27 08:42:35.072073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:38.344 [2024-11-27 08:42:35.072192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.344 [2024-11-27 08:42:35.072225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:38.344 
[2024-11-27 08:42:35.072244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.344 [2024-11-27 08:42:35.075525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.344 [2024-11-27 08:42:35.075575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:38.344 BaseBdev2 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.344 [2024-11-27 08:42:35.084420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.344 [2024-11-27 08:42:35.087289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.344 [2024-11-27 08:42:35.087669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.344 [2024-11-27 08:42:35.087704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:38.344 [2024-11-27 08:42:35.088091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:38.344 [2024-11-27 08:42:35.088495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.344 [2024-11-27 08:42:35.088527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:38.344 [2024-11-27 08:42:35.088815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.344 
08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.344 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.602 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.602 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.602 "name": "raid_bdev1", 00:10:38.602 "uuid": "6c39122b-5faf-49da-bd19-e14f7c388eee", 00:10:38.602 "strip_size_kb": 64, 00:10:38.602 "state": "online", 00:10:38.602 "raid_level": "concat", 00:10:38.602 "superblock": true, 
00:10:38.602 "num_base_bdevs": 2, 00:10:38.602 "num_base_bdevs_discovered": 2, 00:10:38.602 "num_base_bdevs_operational": 2, 00:10:38.602 "base_bdevs_list": [ 00:10:38.602 { 00:10:38.602 "name": "BaseBdev1", 00:10:38.602 "uuid": "8496b4ee-6a81-5648-aec4-159fb0029073", 00:10:38.602 "is_configured": true, 00:10:38.602 "data_offset": 2048, 00:10:38.602 "data_size": 63488 00:10:38.602 }, 00:10:38.602 { 00:10:38.602 "name": "BaseBdev2", 00:10:38.602 "uuid": "fb971638-6aa5-533b-94c7-f46c868f7bbc", 00:10:38.602 "is_configured": true, 00:10:38.602 "data_offset": 2048, 00:10:38.602 "data_size": 63488 00:10:38.602 } 00:10:38.602 ] 00:10:38.602 }' 00:10:38.602 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.602 08:42:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.168 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:39.168 08:42:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:39.168 [2024-11-27 08:42:35.758908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.102 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.103 "name": "raid_bdev1", 00:10:40.103 "uuid": "6c39122b-5faf-49da-bd19-e14f7c388eee", 00:10:40.103 "strip_size_kb": 64, 00:10:40.103 "state": "online", 00:10:40.103 "raid_level": "concat", 
00:10:40.103 "superblock": true, 00:10:40.103 "num_base_bdevs": 2, 00:10:40.103 "num_base_bdevs_discovered": 2, 00:10:40.103 "num_base_bdevs_operational": 2, 00:10:40.103 "base_bdevs_list": [ 00:10:40.103 { 00:10:40.103 "name": "BaseBdev1", 00:10:40.103 "uuid": "8496b4ee-6a81-5648-aec4-159fb0029073", 00:10:40.103 "is_configured": true, 00:10:40.103 "data_offset": 2048, 00:10:40.103 "data_size": 63488 00:10:40.103 }, 00:10:40.103 { 00:10:40.103 "name": "BaseBdev2", 00:10:40.103 "uuid": "fb971638-6aa5-533b-94c7-f46c868f7bbc", 00:10:40.103 "is_configured": true, 00:10:40.103 "data_offset": 2048, 00:10:40.103 "data_size": 63488 00:10:40.103 } 00:10:40.103 ] 00:10:40.103 }' 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.103 08:42:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.668 08:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.668 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.668 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.668 [2024-11-27 08:42:37.187594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.668 [2024-11-27 08:42:37.187670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.668 [2024-11-27 08:42:37.191193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.669 [2024-11-27 08:42:37.191274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.669 [2024-11-27 08:42:37.191326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.669 [2024-11-27 08:42:37.191377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:40.669 { 
00:10:40.669 "results": [ 00:10:40.669 { 00:10:40.669 "job": "raid_bdev1", 00:10:40.669 "core_mask": "0x1", 00:10:40.669 "workload": "randrw", 00:10:40.669 "percentage": 50, 00:10:40.669 "status": "finished", 00:10:40.669 "queue_depth": 1, 00:10:40.669 "io_size": 131072, 00:10:40.669 "runtime": 1.426092, 00:10:40.669 "iops": 9421.55204573057, 00:10:40.669 "mibps": 1177.6940057163213, 00:10:40.669 "io_failed": 1, 00:10:40.669 "io_timeout": 0, 00:10:40.669 "avg_latency_us": 149.54903516071633, 00:10:40.669 "min_latency_us": 38.63272727272727, 00:10:40.669 "max_latency_us": 1995.8690909090908 00:10:40.669 } 00:10:40.669 ], 00:10:40.669 "core_count": 1 00:10:40.669 } 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62634 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' -z 62634 ']' 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 62634 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # uname 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 62634 00:10:40.669 killing process with pid 62634 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 62634' 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 62634 00:10:40.669 08:42:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@975 -- # wait 62634 00:10:40.669 [2024-11-27 08:42:37.235146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.669 [2024-11-27 08:42:37.371512] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.97n5QCyrBz 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:42.094 00:10:42.094 real 0m4.754s 00:10:42.094 user 0m5.909s 00:10:42.094 sys 0m0.658s 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:42.094 ************************************ 00:10:42.094 END TEST raid_write_error_test 00:10:42.094 ************************************ 00:10:42.094 08:42:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.094 08:42:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:42.094 08:42:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:42.094 08:42:38 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:10:42.094 08:42:38 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:42.094 08:42:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
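The pass/fail decision for raid_write_error_test traced just above (bdev_raid.sh@845–849) is a small text pipeline: drop bdevperf's "Job" header line, keep the raid_bdev1 summary line, take field 6 as the failures-per-second value, and require it to be non-zero, since a concat array has no redundancy and the injected write error must surface. A minimal runnable sketch of that pipeline follows; the sample text is illustrative, not verbatim bdevperf output, though the 0.70 value mirrors the fail_per_s seen in this log.

```shell
# Sketch of the bdev_raid.sh@845 failure-rate extraction. The sample stands in
# for the bdevperf output file (/raidtest/tmp.* in the log); field 6 of the
# raid_bdev1 summary line is assumed to be the failures-per-second column.
sample='Job raid_bdev1 summary header
raid_bdev1 9421.55 IOPS 1177.69 MiBps 0.70'

fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')

# Non-redundant level (concat): injected write errors must be visible.
if [ "$fail_per_s" != "0.00" ]; then
  echo "write errors surfaced as expected: $fail_per_s fail/s"
fi
```

The `has_redundancy` check in the log is what selects this branch: for raid1 the same test instead expects the array to absorb the error and reports 0.00.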
00:10:42.094 ************************************ 00:10:42.094 START TEST raid_state_function_test 00:10:42.094 ************************************ 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 2 false 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local 
strip_size 00:10:42.094 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62778 00:10:42.095 Process raid pid: 62778 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62778' 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62778 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 62778 ']' 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
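The `waitforlisten 62778` call traced here blocks until the freshly launched bdev_svc process is accepting RPCs on /var/tmp/spdk.sock, retrying up to `max_retries=100` times. A minimal sketch of that polling pattern, with a temp file standing in for the RPC socket and a background subshell standing in for the target coming up (both are stand-ins, not the real waitforlisten implementation):

```shell
# Simulated waitforlisten: poll for the socket path until it exists or the
# retry budget runs out.
sock=$(mktemp -u)              # stand-in for /var/tmp/spdk.sock
( sleep 0.2; : > "$sock" ) &   # stand-in for bdev_svc creating its socket

i=0
max_retries=100
while [ ! -e "$sock" ] && [ "$i" -lt "$max_retries" ]; do
  sleep 0.1
  i=$((i + 1))
done

ready=$([ -e "$sock" ] && echo yes || echo no)
echo "listening: $ready after $i polls"
rm -f "$sock"
```

The real helper additionally probes the socket with an RPC rather than just checking for the path, so this is only the retry-loop skeleton.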
00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:42.095 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.095 [2024-11-27 08:42:38.725739] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:10:42.095 [2024-11-27 08:42:38.725977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.353 [2024-11-27 08:42:38.913567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.353 [2024-11-27 08:42:39.062774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.614 [2024-11-27 08:42:39.296815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.614 [2024-11-27 08:42:39.296891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.181 [2024-11-27 08:42:39.736836] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.181 [2024-11-27 08:42:39.736935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.181 [2024-11-27 08:42:39.736953] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:10:43.181 [2024-11-27 08:42:39.736970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.181 "name": "Existed_Raid", 00:10:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.181 "strip_size_kb": 0, 00:10:43.181 "state": "configuring", 00:10:43.181 "raid_level": "raid1", 00:10:43.181 "superblock": false, 00:10:43.181 "num_base_bdevs": 2, 00:10:43.181 "num_base_bdevs_discovered": 0, 00:10:43.181 "num_base_bdevs_operational": 2, 00:10:43.181 "base_bdevs_list": [ 00:10:43.181 { 00:10:43.181 "name": "BaseBdev1", 00:10:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.181 "is_configured": false, 00:10:43.181 "data_offset": 0, 00:10:43.181 "data_size": 0 00:10:43.181 }, 00:10:43.181 { 00:10:43.181 "name": "BaseBdev2", 00:10:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.181 "is_configured": false, 00:10:43.181 "data_offset": 0, 00:10:43.181 "data_size": 0 00:10:43.181 } 00:10:43.181 ] 00:10:43.181 }' 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.181 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.749 [2024-11-27 08:42:40.260895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.749 [2024-11-27 08:42:40.260986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.749 [2024-11-27 08:42:40.268846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.749 [2024-11-27 08:42:40.268899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.749 [2024-11-27 08:42:40.268916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.749 [2024-11-27 08:42:40.268936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.749 [2024-11-27 08:42:40.317621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.749 BaseBdev1 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:10:43.749 
08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.749 [ 00:10:43.749 { 00:10:43.749 "name": "BaseBdev1", 00:10:43.749 "aliases": [ 00:10:43.749 "9a783bf3-b891-43d3-8563-2d0b97c6766c" 00:10:43.749 ], 00:10:43.749 "product_name": "Malloc disk", 00:10:43.749 "block_size": 512, 00:10:43.749 "num_blocks": 65536, 00:10:43.749 "uuid": "9a783bf3-b891-43d3-8563-2d0b97c6766c", 00:10:43.749 "assigned_rate_limits": { 00:10:43.749 "rw_ios_per_sec": 0, 00:10:43.749 "rw_mbytes_per_sec": 0, 00:10:43.749 "r_mbytes_per_sec": 0, 00:10:43.749 "w_mbytes_per_sec": 0 00:10:43.749 }, 00:10:43.749 "claimed": true, 00:10:43.749 "claim_type": "exclusive_write", 00:10:43.749 "zoned": false, 00:10:43.749 "supported_io_types": { 00:10:43.749 "read": true, 00:10:43.749 "write": true, 00:10:43.749 "unmap": true, 00:10:43.749 "flush": true, 00:10:43.749 "reset": true, 00:10:43.749 "nvme_admin": false, 00:10:43.749 "nvme_io": false, 00:10:43.749 "nvme_io_md": false, 00:10:43.749 "write_zeroes": true, 00:10:43.749 "zcopy": true, 00:10:43.749 "get_zone_info": 
false, 00:10:43.749 "zone_management": false, 00:10:43.749 "zone_append": false, 00:10:43.749 "compare": false, 00:10:43.749 "compare_and_write": false, 00:10:43.749 "abort": true, 00:10:43.749 "seek_hole": false, 00:10:43.749 "seek_data": false, 00:10:43.749 "copy": true, 00:10:43.749 "nvme_iov_md": false 00:10:43.749 }, 00:10:43.749 "memory_domains": [ 00:10:43.749 { 00:10:43.749 "dma_device_id": "system", 00:10:43.749 "dma_device_type": 1 00:10:43.749 }, 00:10:43.749 { 00:10:43.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.749 "dma_device_type": 2 00:10:43.749 } 00:10:43.749 ], 00:10:43.749 "driver_specific": {} 00:10:43.749 } 00:10:43.749 ] 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.749 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.749 "name": "Existed_Raid", 00:10:43.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.750 "strip_size_kb": 0, 00:10:43.750 "state": "configuring", 00:10:43.750 "raid_level": "raid1", 00:10:43.750 "superblock": false, 00:10:43.750 "num_base_bdevs": 2, 00:10:43.750 "num_base_bdevs_discovered": 1, 00:10:43.750 "num_base_bdevs_operational": 2, 00:10:43.750 "base_bdevs_list": [ 00:10:43.750 { 00:10:43.750 "name": "BaseBdev1", 00:10:43.750 "uuid": "9a783bf3-b891-43d3-8563-2d0b97c6766c", 00:10:43.750 "is_configured": true, 00:10:43.750 "data_offset": 0, 00:10:43.750 "data_size": 65536 00:10:43.750 }, 00:10:43.750 { 00:10:43.750 "name": "BaseBdev2", 00:10:43.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.750 "is_configured": false, 00:10:43.750 "data_offset": 0, 00:10:43.750 "data_size": 0 00:10:43.750 } 00:10:43.750 ] 00:10:43.750 }' 00:10:43.750 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.750 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.318 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.318 08:42:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.318 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.318 [2024-11-27 08:42:40.873848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.318 [2024-11-27 08:42:40.873943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:44.318 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.318 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:44.318 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.318 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.318 [2024-11-27 08:42:40.881850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.318 [2024-11-27 08:42:40.884539] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.319 [2024-11-27 08:42:40.884598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.319 "name": "Existed_Raid", 00:10:44.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.319 "strip_size_kb": 0, 00:10:44.319 "state": "configuring", 00:10:44.319 "raid_level": "raid1", 00:10:44.319 "superblock": false, 00:10:44.319 "num_base_bdevs": 2, 00:10:44.319 "num_base_bdevs_discovered": 1, 00:10:44.319 "num_base_bdevs_operational": 2, 00:10:44.319 "base_bdevs_list": [ 00:10:44.319 { 00:10:44.319 "name": "BaseBdev1", 00:10:44.319 "uuid": "9a783bf3-b891-43d3-8563-2d0b97c6766c", 00:10:44.319 
"is_configured": true, 00:10:44.319 "data_offset": 0, 00:10:44.319 "data_size": 65536 00:10:44.319 }, 00:10:44.319 { 00:10:44.319 "name": "BaseBdev2", 00:10:44.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.319 "is_configured": false, 00:10:44.319 "data_offset": 0, 00:10:44.319 "data_size": 0 00:10:44.319 } 00:10:44.319 ] 00:10:44.319 }' 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.319 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.887 [2024-11-27 08:42:41.447855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.887 [2024-11-27 08:42:41.447932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:44.887 [2024-11-27 08:42:41.447947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:44.887 [2024-11-27 08:42:41.448310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:44.887 [2024-11-27 08:42:41.448570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:44.887 [2024-11-27 08:42:41.448606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:44.887 [2024-11-27 08:42:41.448956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.887 BaseBdev2 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.887 [ 00:10:44.887 { 00:10:44.887 "name": "BaseBdev2", 00:10:44.887 "aliases": [ 00:10:44.887 "4ac92f10-4160-4f56-8841-dc3cbc5b3cc3" 00:10:44.887 ], 00:10:44.887 "product_name": "Malloc disk", 00:10:44.887 "block_size": 512, 00:10:44.887 "num_blocks": 65536, 00:10:44.887 "uuid": "4ac92f10-4160-4f56-8841-dc3cbc5b3cc3", 00:10:44.887 "assigned_rate_limits": { 00:10:44.887 "rw_ios_per_sec": 0, 00:10:44.887 "rw_mbytes_per_sec": 0, 00:10:44.887 "r_mbytes_per_sec": 0, 00:10:44.887 "w_mbytes_per_sec": 0 00:10:44.887 }, 00:10:44.887 "claimed": true, 00:10:44.887 "claim_type": 
"exclusive_write", 00:10:44.887 "zoned": false, 00:10:44.887 "supported_io_types": { 00:10:44.887 "read": true, 00:10:44.887 "write": true, 00:10:44.887 "unmap": true, 00:10:44.887 "flush": true, 00:10:44.887 "reset": true, 00:10:44.887 "nvme_admin": false, 00:10:44.887 "nvme_io": false, 00:10:44.887 "nvme_io_md": false, 00:10:44.887 "write_zeroes": true, 00:10:44.887 "zcopy": true, 00:10:44.887 "get_zone_info": false, 00:10:44.887 "zone_management": false, 00:10:44.887 "zone_append": false, 00:10:44.887 "compare": false, 00:10:44.887 "compare_and_write": false, 00:10:44.887 "abort": true, 00:10:44.887 "seek_hole": false, 00:10:44.887 "seek_data": false, 00:10:44.887 "copy": true, 00:10:44.887 "nvme_iov_md": false 00:10:44.887 }, 00:10:44.887 "memory_domains": [ 00:10:44.887 { 00:10:44.887 "dma_device_id": "system", 00:10:44.887 "dma_device_type": 1 00:10:44.887 }, 00:10:44.887 { 00:10:44.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.887 "dma_device_type": 2 00:10:44.887 } 00:10:44.887 ], 00:10:44.887 "driver_specific": {} 00:10:44.887 } 00:10:44.887 ] 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.887 
08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.887 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.888 "name": "Existed_Raid", 00:10:44.888 "uuid": "8916f54d-d9dc-41aa-b67d-3286849df209", 00:10:44.888 "strip_size_kb": 0, 00:10:44.888 "state": "online", 00:10:44.888 "raid_level": "raid1", 00:10:44.888 "superblock": false, 00:10:44.888 "num_base_bdevs": 2, 00:10:44.888 "num_base_bdevs_discovered": 2, 00:10:44.888 "num_base_bdevs_operational": 2, 00:10:44.888 "base_bdevs_list": [ 00:10:44.888 { 00:10:44.888 "name": "BaseBdev1", 00:10:44.888 "uuid": "9a783bf3-b891-43d3-8563-2d0b97c6766c", 00:10:44.888 "is_configured": true, 00:10:44.888 "data_offset": 0, 00:10:44.888 "data_size": 65536 00:10:44.888 }, 00:10:44.888 { 00:10:44.888 "name": "BaseBdev2", 
00:10:44.888 "uuid": "4ac92f10-4160-4f56-8841-dc3cbc5b3cc3", 00:10:44.888 "is_configured": true, 00:10:44.888 "data_offset": 0, 00:10:44.888 "data_size": 65536 00:10:44.888 } 00:10:44.888 ] 00:10:44.888 }' 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.888 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.474 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.474 [2024-11-27 08:42:42.004444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.474 "name": "Existed_Raid", 00:10:45.474 "aliases": [ 00:10:45.474 "8916f54d-d9dc-41aa-b67d-3286849df209" 00:10:45.474 ], 
00:10:45.474 "product_name": "Raid Volume", 00:10:45.474 "block_size": 512, 00:10:45.474 "num_blocks": 65536, 00:10:45.474 "uuid": "8916f54d-d9dc-41aa-b67d-3286849df209", 00:10:45.474 "assigned_rate_limits": { 00:10:45.474 "rw_ios_per_sec": 0, 00:10:45.474 "rw_mbytes_per_sec": 0, 00:10:45.474 "r_mbytes_per_sec": 0, 00:10:45.474 "w_mbytes_per_sec": 0 00:10:45.474 }, 00:10:45.474 "claimed": false, 00:10:45.474 "zoned": false, 00:10:45.474 "supported_io_types": { 00:10:45.474 "read": true, 00:10:45.474 "write": true, 00:10:45.474 "unmap": false, 00:10:45.474 "flush": false, 00:10:45.474 "reset": true, 00:10:45.474 "nvme_admin": false, 00:10:45.474 "nvme_io": false, 00:10:45.474 "nvme_io_md": false, 00:10:45.474 "write_zeroes": true, 00:10:45.474 "zcopy": false, 00:10:45.474 "get_zone_info": false, 00:10:45.474 "zone_management": false, 00:10:45.474 "zone_append": false, 00:10:45.474 "compare": false, 00:10:45.474 "compare_and_write": false, 00:10:45.474 "abort": false, 00:10:45.474 "seek_hole": false, 00:10:45.474 "seek_data": false, 00:10:45.474 "copy": false, 00:10:45.474 "nvme_iov_md": false 00:10:45.474 }, 00:10:45.474 "memory_domains": [ 00:10:45.474 { 00:10:45.474 "dma_device_id": "system", 00:10:45.474 "dma_device_type": 1 00:10:45.474 }, 00:10:45.474 { 00:10:45.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.474 "dma_device_type": 2 00:10:45.474 }, 00:10:45.474 { 00:10:45.474 "dma_device_id": "system", 00:10:45.474 "dma_device_type": 1 00:10:45.474 }, 00:10:45.474 { 00:10:45.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.474 "dma_device_type": 2 00:10:45.474 } 00:10:45.474 ], 00:10:45.474 "driver_specific": { 00:10:45.474 "raid": { 00:10:45.474 "uuid": "8916f54d-d9dc-41aa-b67d-3286849df209", 00:10:45.474 "strip_size_kb": 0, 00:10:45.474 "state": "online", 00:10:45.474 "raid_level": "raid1", 00:10:45.474 "superblock": false, 00:10:45.474 "num_base_bdevs": 2, 00:10:45.474 "num_base_bdevs_discovered": 2, 00:10:45.474 "num_base_bdevs_operational": 
2, 00:10:45.474 "base_bdevs_list": [ 00:10:45.474 { 00:10:45.474 "name": "BaseBdev1", 00:10:45.474 "uuid": "9a783bf3-b891-43d3-8563-2d0b97c6766c", 00:10:45.474 "is_configured": true, 00:10:45.474 "data_offset": 0, 00:10:45.474 "data_size": 65536 00:10:45.474 }, 00:10:45.474 { 00:10:45.474 "name": "BaseBdev2", 00:10:45.474 "uuid": "4ac92f10-4160-4f56-8841-dc3cbc5b3cc3", 00:10:45.474 "is_configured": true, 00:10:45.474 "data_offset": 0, 00:10:45.474 "data_size": 65536 00:10:45.474 } 00:10:45.474 ] 00:10:45.474 } 00:10:45.474 } 00:10:45.474 }' 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.474 BaseBdev2' 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.474 08:42:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.474 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.733 [2024-11-27 08:42:42.244134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.733 "name": "Existed_Raid", 00:10:45.733 "uuid": 
"8916f54d-d9dc-41aa-b67d-3286849df209", 00:10:45.733 "strip_size_kb": 0, 00:10:45.733 "state": "online", 00:10:45.733 "raid_level": "raid1", 00:10:45.733 "superblock": false, 00:10:45.733 "num_base_bdevs": 2, 00:10:45.733 "num_base_bdevs_discovered": 1, 00:10:45.733 "num_base_bdevs_operational": 1, 00:10:45.733 "base_bdevs_list": [ 00:10:45.733 { 00:10:45.733 "name": null, 00:10:45.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.733 "is_configured": false, 00:10:45.733 "data_offset": 0, 00:10:45.733 "data_size": 65536 00:10:45.733 }, 00:10:45.733 { 00:10:45.733 "name": "BaseBdev2", 00:10:45.733 "uuid": "4ac92f10-4160-4f56-8841-dc3cbc5b3cc3", 00:10:45.733 "is_configured": true, 00:10:45.733 "data_offset": 0, 00:10:45.733 "data_size": 65536 00:10:45.733 } 00:10:45.733 ] 00:10:45.733 }' 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.733 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.301 [2024-11-27 08:42:42.892458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.301 [2024-11-27 08:42:42.892617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.301 [2024-11-27 08:42:42.988826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.301 [2024-11-27 08:42:42.988928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.301 [2024-11-27 08:42:42.988953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.301 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:46.301 
08:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62778 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 62778 ']' 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # kill -0 62778 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:46.301 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 62778 00:10:46.559 killing process with pid 62778 00:10:46.559 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:46.559 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:46.559 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 62778' 00:10:46.559 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # kill 62778 00:10:46.560 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 62778 00:10:46.560 [2024-11-27 08:42:43.077753] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.560 [2024-11-27 08:42:43.093808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.934 ************************************ 00:10:47.934 END TEST raid_state_function_test 00:10:47.934 ************************************ 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:47.934 00:10:47.934 real 0m5.651s 00:10:47.934 user 
0m8.437s 00:10:47.934 sys 0m0.816s 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.934 08:42:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:47.934 08:42:44 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:10:47.934 08:42:44 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:47.934 08:42:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.934 ************************************ 00:10:47.934 START TEST raid_state_function_test_sb 00:10:47.934 ************************************ 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 2 true 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63036 00:10:47.934 Process raid pid: 63036 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63036' 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63036 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 
-- # '[' -z 63036 ']' 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:47.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:47.934 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.934 [2024-11-27 08:42:44.432855] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:10:47.934 [2024-11-27 08:42:44.433048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.934 [2024-11-27 08:42:44.615871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.194 [2024-11-27 08:42:44.767338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.452 [2024-11-27 08:42:45.004935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.452 [2024-11-27 08:42:45.005046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.711 [2024-11-27 08:42:45.396308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.711 [2024-11-27 08:42:45.396389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.711 [2024-11-27 08:42:45.396410] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.711 [2024-11-27 08:42:45.396428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.711 08:42:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.711 "name": "Existed_Raid", 00:10:48.711 "uuid": "3ca17b62-3f7c-4bf6-a05b-fd8a57c6cd0f", 00:10:48.711 "strip_size_kb": 0, 00:10:48.711 "state": "configuring", 00:10:48.711 "raid_level": "raid1", 00:10:48.711 "superblock": true, 00:10:48.711 "num_base_bdevs": 2, 00:10:48.711 "num_base_bdevs_discovered": 0, 00:10:48.711 "num_base_bdevs_operational": 2, 00:10:48.711 "base_bdevs_list": [ 00:10:48.711 { 00:10:48.711 "name": "BaseBdev1", 00:10:48.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.711 "is_configured": false, 00:10:48.711 "data_offset": 0, 00:10:48.711 "data_size": 0 00:10:48.711 }, 00:10:48.711 { 00:10:48.711 "name": "BaseBdev2", 00:10:48.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.711 "is_configured": false, 00:10:48.711 "data_offset": 0, 00:10:48.711 "data_size": 0 00:10:48.711 } 00:10:48.711 ] 00:10:48.711 }' 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.711 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.277 
08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 [2024-11-27 08:42:45.916396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.277 [2024-11-27 08:42:45.916448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 [2024-11-27 08:42:45.924359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.277 [2024-11-27 08:42:45.924410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.277 [2024-11-27 08:42:45.924427] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.277 [2024-11-27 08:42:45.924447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 [2024-11-27 
08:42:45.974085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.277 BaseBdev1 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.277 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.277 [ 00:10:49.277 { 00:10:49.277 "name": "BaseBdev1", 00:10:49.277 "aliases": [ 00:10:49.277 "a1d1f87f-4291-402d-98be-8493ef556dc1" 00:10:49.277 ], 00:10:49.277 "product_name": "Malloc disk", 00:10:49.277 "block_size": 512, 00:10:49.277 "num_blocks": 
65536, 00:10:49.277 "uuid": "a1d1f87f-4291-402d-98be-8493ef556dc1", 00:10:49.277 "assigned_rate_limits": { 00:10:49.277 "rw_ios_per_sec": 0, 00:10:49.277 "rw_mbytes_per_sec": 0, 00:10:49.277 "r_mbytes_per_sec": 0, 00:10:49.277 "w_mbytes_per_sec": 0 00:10:49.277 }, 00:10:49.277 "claimed": true, 00:10:49.277 "claim_type": "exclusive_write", 00:10:49.277 "zoned": false, 00:10:49.277 "supported_io_types": { 00:10:49.277 "read": true, 00:10:49.277 "write": true, 00:10:49.277 "unmap": true, 00:10:49.277 "flush": true, 00:10:49.277 "reset": true, 00:10:49.277 "nvme_admin": false, 00:10:49.277 "nvme_io": false, 00:10:49.277 "nvme_io_md": false, 00:10:49.277 "write_zeroes": true, 00:10:49.277 "zcopy": true, 00:10:49.277 "get_zone_info": false, 00:10:49.277 "zone_management": false, 00:10:49.277 "zone_append": false, 00:10:49.277 "compare": false, 00:10:49.277 "compare_and_write": false, 00:10:49.277 "abort": true, 00:10:49.277 "seek_hole": false, 00:10:49.277 "seek_data": false, 00:10:49.277 "copy": true, 00:10:49.277 "nvme_iov_md": false 00:10:49.277 }, 00:10:49.277 "memory_domains": [ 00:10:49.277 { 00:10:49.277 "dma_device_id": "system", 00:10:49.277 "dma_device_type": 1 00:10:49.277 }, 00:10:49.277 { 00:10:49.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.277 "dma_device_type": 2 00:10:49.277 } 00:10:49.277 ], 00:10:49.277 "driver_specific": {} 00:10:49.277 } 00:10:49.277 ] 00:10:49.277 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.277 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:10:49.277 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.278 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.580 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.580 "name": "Existed_Raid", 00:10:49.580 "uuid": "915754fc-59e4-40c6-bd71-6b065326e20a", 00:10:49.580 "strip_size_kb": 0, 00:10:49.580 "state": "configuring", 00:10:49.580 "raid_level": "raid1", 00:10:49.580 "superblock": true, 00:10:49.580 "num_base_bdevs": 2, 00:10:49.580 "num_base_bdevs_discovered": 1, 00:10:49.580 "num_base_bdevs_operational": 2, 00:10:49.580 "base_bdevs_list": [ 00:10:49.580 { 00:10:49.580 "name": "BaseBdev1", 00:10:49.580 "uuid": 
"a1d1f87f-4291-402d-98be-8493ef556dc1", 00:10:49.580 "is_configured": true, 00:10:49.580 "data_offset": 2048, 00:10:49.580 "data_size": 63488 00:10:49.580 }, 00:10:49.580 { 00:10:49.580 "name": "BaseBdev2", 00:10:49.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.580 "is_configured": false, 00:10:49.580 "data_offset": 0, 00:10:49.580 "data_size": 0 00:10:49.580 } 00:10:49.580 ] 00:10:49.580 }' 00:10:49.580 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.580 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.838 [2024-11-27 08:42:46.534347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.838 [2024-11-27 08:42:46.534432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.838 [2024-11-27 08:42:46.542381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.838 [2024-11-27 08:42:46.545027] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:10:49.838 [2024-11-27 08:42:46.545080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.838 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:49.839 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.839 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.097 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.097 "name": "Existed_Raid", 00:10:50.097 "uuid": "beeea498-5d90-41be-803e-952d86ec4b18", 00:10:50.097 "strip_size_kb": 0, 00:10:50.097 "state": "configuring", 00:10:50.097 "raid_level": "raid1", 00:10:50.097 "superblock": true, 00:10:50.097 "num_base_bdevs": 2, 00:10:50.097 "num_base_bdevs_discovered": 1, 00:10:50.097 "num_base_bdevs_operational": 2, 00:10:50.097 "base_bdevs_list": [ 00:10:50.097 { 00:10:50.097 "name": "BaseBdev1", 00:10:50.097 "uuid": "a1d1f87f-4291-402d-98be-8493ef556dc1", 00:10:50.097 "is_configured": true, 00:10:50.097 "data_offset": 2048, 00:10:50.097 "data_size": 63488 00:10:50.097 }, 00:10:50.097 { 00:10:50.097 "name": "BaseBdev2", 00:10:50.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.097 "is_configured": false, 00:10:50.097 "data_offset": 0, 00:10:50.097 "data_size": 0 00:10:50.097 } 00:10:50.097 ] 00:10:50.097 }' 00:10:50.097 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.097 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.354 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.354 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.355 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.612 [2024-11-27 08:42:47.116860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.612 [2024-11-27 08:42:47.117505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:10:50.612 [2024-11-27 08:42:47.117665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:50.612 BaseBdev2 00:10:50.612 [2024-11-27 08:42:47.118069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:50.612 [2024-11-27 08:42:47.118320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:50.612 [2024-11-27 08:42:47.118369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:50.612 [2024-11-27 08:42:47.118568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.612 08:42:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.612 [ 00:10:50.612 { 00:10:50.612 "name": "BaseBdev2", 00:10:50.612 "aliases": [ 00:10:50.612 "8bea2a5a-9c1c-4aee-8884-334d8b561c00" 00:10:50.612 ], 00:10:50.612 "product_name": "Malloc disk", 00:10:50.612 "block_size": 512, 00:10:50.612 "num_blocks": 65536, 00:10:50.612 "uuid": "8bea2a5a-9c1c-4aee-8884-334d8b561c00", 00:10:50.612 "assigned_rate_limits": { 00:10:50.612 "rw_ios_per_sec": 0, 00:10:50.612 "rw_mbytes_per_sec": 0, 00:10:50.612 "r_mbytes_per_sec": 0, 00:10:50.612 "w_mbytes_per_sec": 0 00:10:50.612 }, 00:10:50.612 "claimed": true, 00:10:50.612 "claim_type": "exclusive_write", 00:10:50.612 "zoned": false, 00:10:50.612 "supported_io_types": { 00:10:50.612 "read": true, 00:10:50.612 "write": true, 00:10:50.612 "unmap": true, 00:10:50.612 "flush": true, 00:10:50.612 "reset": true, 00:10:50.612 "nvme_admin": false, 00:10:50.612 "nvme_io": false, 00:10:50.612 "nvme_io_md": false, 00:10:50.612 "write_zeroes": true, 00:10:50.612 "zcopy": true, 00:10:50.612 "get_zone_info": false, 00:10:50.612 "zone_management": false, 00:10:50.612 "zone_append": false, 00:10:50.612 "compare": false, 00:10:50.612 "compare_and_write": false, 00:10:50.612 "abort": true, 00:10:50.612 "seek_hole": false, 00:10:50.612 "seek_data": false, 00:10:50.612 "copy": true, 00:10:50.612 "nvme_iov_md": false 00:10:50.612 }, 00:10:50.612 "memory_domains": [ 00:10:50.612 { 00:10:50.612 "dma_device_id": "system", 00:10:50.612 "dma_device_type": 1 00:10:50.612 }, 00:10:50.612 { 00:10:50.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.612 "dma_device_type": 2 00:10:50.612 } 00:10:50.612 ], 00:10:50.612 "driver_specific": {} 00:10:50.612 } 00:10:50.612 ] 
00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.612 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.613 
08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.613 "name": "Existed_Raid", 00:10:50.613 "uuid": "beeea498-5d90-41be-803e-952d86ec4b18", 00:10:50.613 "strip_size_kb": 0, 00:10:50.613 "state": "online", 00:10:50.613 "raid_level": "raid1", 00:10:50.613 "superblock": true, 00:10:50.613 "num_base_bdevs": 2, 00:10:50.613 "num_base_bdevs_discovered": 2, 00:10:50.613 "num_base_bdevs_operational": 2, 00:10:50.613 "base_bdevs_list": [ 00:10:50.613 { 00:10:50.613 "name": "BaseBdev1", 00:10:50.613 "uuid": "a1d1f87f-4291-402d-98be-8493ef556dc1", 00:10:50.613 "is_configured": true, 00:10:50.613 "data_offset": 2048, 00:10:50.613 "data_size": 63488 00:10:50.613 }, 00:10:50.613 { 00:10:50.613 "name": "BaseBdev2", 00:10:50.613 "uuid": "8bea2a5a-9c1c-4aee-8884-334d8b561c00", 00:10:50.613 "is_configured": true, 00:10:50.613 "data_offset": 2048, 00:10:50.613 "data_size": 63488 00:10:50.613 } 00:10:50.613 ] 00:10:50.613 }' 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.613 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.178 08:42:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.178 [2024-11-27 08:42:47.653509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.178 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.178 "name": "Existed_Raid", 00:10:51.178 "aliases": [ 00:10:51.178 "beeea498-5d90-41be-803e-952d86ec4b18" 00:10:51.178 ], 00:10:51.178 "product_name": "Raid Volume", 00:10:51.178 "block_size": 512, 00:10:51.178 "num_blocks": 63488, 00:10:51.178 "uuid": "beeea498-5d90-41be-803e-952d86ec4b18", 00:10:51.178 "assigned_rate_limits": { 00:10:51.178 "rw_ios_per_sec": 0, 00:10:51.178 "rw_mbytes_per_sec": 0, 00:10:51.178 "r_mbytes_per_sec": 0, 00:10:51.178 "w_mbytes_per_sec": 0 00:10:51.178 }, 00:10:51.178 "claimed": false, 00:10:51.178 "zoned": false, 00:10:51.178 "supported_io_types": { 00:10:51.178 "read": true, 00:10:51.178 "write": true, 00:10:51.178 "unmap": false, 00:10:51.178 "flush": false, 00:10:51.178 "reset": true, 00:10:51.178 "nvme_admin": false, 00:10:51.178 "nvme_io": false, 00:10:51.178 "nvme_io_md": false, 00:10:51.178 "write_zeroes": true, 00:10:51.178 "zcopy": false, 00:10:51.178 "get_zone_info": false, 00:10:51.178 "zone_management": false, 00:10:51.178 "zone_append": false, 00:10:51.178 "compare": false, 00:10:51.178 "compare_and_write": false, 00:10:51.179 "abort": false, 
00:10:51.179 "seek_hole": false, 00:10:51.179 "seek_data": false, 00:10:51.179 "copy": false, 00:10:51.179 "nvme_iov_md": false 00:10:51.179 }, 00:10:51.179 "memory_domains": [ 00:10:51.179 { 00:10:51.179 "dma_device_id": "system", 00:10:51.179 "dma_device_type": 1 00:10:51.179 }, 00:10:51.179 { 00:10:51.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.179 "dma_device_type": 2 00:10:51.179 }, 00:10:51.179 { 00:10:51.179 "dma_device_id": "system", 00:10:51.179 "dma_device_type": 1 00:10:51.179 }, 00:10:51.179 { 00:10:51.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.179 "dma_device_type": 2 00:10:51.179 } 00:10:51.179 ], 00:10:51.179 "driver_specific": { 00:10:51.179 "raid": { 00:10:51.179 "uuid": "beeea498-5d90-41be-803e-952d86ec4b18", 00:10:51.179 "strip_size_kb": 0, 00:10:51.179 "state": "online", 00:10:51.179 "raid_level": "raid1", 00:10:51.179 "superblock": true, 00:10:51.179 "num_base_bdevs": 2, 00:10:51.179 "num_base_bdevs_discovered": 2, 00:10:51.179 "num_base_bdevs_operational": 2, 00:10:51.179 "base_bdevs_list": [ 00:10:51.179 { 00:10:51.179 "name": "BaseBdev1", 00:10:51.179 "uuid": "a1d1f87f-4291-402d-98be-8493ef556dc1", 00:10:51.179 "is_configured": true, 00:10:51.179 "data_offset": 2048, 00:10:51.179 "data_size": 63488 00:10:51.179 }, 00:10:51.179 { 00:10:51.179 "name": "BaseBdev2", 00:10:51.179 "uuid": "8bea2a5a-9c1c-4aee-8884-334d8b561c00", 00:10:51.179 "is_configured": true, 00:10:51.179 "data_offset": 2048, 00:10:51.179 "data_size": 63488 00:10:51.179 } 00:10:51.179 ] 00:10:51.179 } 00:10:51.179 } 00:10:51.179 }' 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:51.179 BaseBdev2' 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.179 08:42:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.179 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.179 [2024-11-27 08:42:47.893246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.438 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.438 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.438 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.438 "name": "Existed_Raid", 00:10:51.438 "uuid": "beeea498-5d90-41be-803e-952d86ec4b18", 00:10:51.438 "strip_size_kb": 0, 00:10:51.438 "state": "online", 00:10:51.438 "raid_level": "raid1", 00:10:51.438 "superblock": true, 00:10:51.438 "num_base_bdevs": 2, 00:10:51.438 "num_base_bdevs_discovered": 1, 00:10:51.438 "num_base_bdevs_operational": 1, 00:10:51.438 "base_bdevs_list": [ 00:10:51.438 { 00:10:51.438 "name": null, 00:10:51.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.438 "is_configured": false, 00:10:51.438 "data_offset": 0, 00:10:51.438 "data_size": 63488 00:10:51.438 }, 00:10:51.438 { 00:10:51.438 "name": "BaseBdev2", 00:10:51.438 "uuid": "8bea2a5a-9c1c-4aee-8884-334d8b561c00", 00:10:51.438 "is_configured": true, 00:10:51.438 "data_offset": 2048, 00:10:51.438 "data_size": 63488 00:10:51.438 } 00:10:51.438 ] 00:10:51.438 }' 00:10:51.438 08:42:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.438 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.006 [2024-11-27 08:42:48.562291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.006 [2024-11-27 08:42:48.562487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.006 [2024-11-27 08:42:48.658083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.006 [2024-11-27 08:42:48.658213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.006 [2024-11-27 08:42:48.658238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63036 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 63036 ']' 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 63036 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 
00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 63036 00:10:52.006 killing process with pid 63036 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 63036' 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 63036 00:10:52.006 [2024-11-27 08:42:48.750974] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.006 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 63036 00:10:52.266 [2024-11-27 08:42:48.768328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.286 08:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:53.286 00:10:53.286 real 0m5.642s 00:10:53.286 user 0m8.356s 00:10:53.286 sys 0m0.867s 00:10:53.286 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:53.286 ************************************ 00:10:53.286 END TEST raid_state_function_test_sb 00:10:53.286 ************************************ 00:10:53.286 08:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.286 08:42:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:53.286 08:42:50 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:53.286 08:42:50 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:53.286 08:42:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.286 ************************************ 00:10:53.286 START TEST 
raid_superblock_test 00:10:53.286 ************************************ 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test raid1 2 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:53.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63294 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63294 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:53.286 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 63294 ']' 00:10:53.287 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.287 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:53.287 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.287 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:53.287 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.546 [2024-11-27 08:42:50.167551] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
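The record above shows the harness launching `bdev_svc -L bdev_raid`, capturing `raid_pid=63294`, and calling `waitforlisten 63294` with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`. A simplified, dependency-free sketch of that poll loop — the real helper in `autotest_common.sh` additionally drives an RPC wait-for-init call, which is omitted here:

```shell
# Simplified waitforlisten: poll until the freshly started app (pid) is
# alive and its RPC socket path exists, up to max_retries attempts.
# Sketch only; the real SPDK helper also waits on the RPC framework init.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    while ((max_retries-- > 0)); do
        # Fail fast if the app died before creating its socket.
        kill -0 "$pid" 2>/dev/null || return 1
        [ -e "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1  # timed out waiting for the listener
}
```

Typical usage mirrors the trace: `bdev_svc -L bdev_raid & raid_pid=$!; waitforlisten "$raid_pid"` before any `rpc_cmd` is issued.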
00:10:53.546 [2024-11-27 08:42:50.168165] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63294 ] 00:10:53.806 [2024-11-27 08:42:50.358943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.806 [2024-11-27 08:42:50.507886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.065 [2024-11-27 08:42:50.748861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.065 [2024-11-27 08:42:50.748942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:54.633 
08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.633 malloc1 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.633 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.633 [2024-11-27 08:42:51.203504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:54.633 [2024-11-27 08:42:51.204060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.634 [2024-11-27 08:42:51.204158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:54.634 [2024-11-27 08:42:51.204196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.634 [2024-11-27 08:42:51.209524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.634 [2024-11-27 08:42:51.209622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:54.634 pt1 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.634 malloc2 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.634 [2024-11-27 08:42:51.283532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:54.634 [2024-11-27 08:42:51.283643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.634 [2024-11-27 08:42:51.283676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:54.634 [2024-11-27 08:42:51.283691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.634 [2024-11-27 08:42:51.287058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.634 pt2 00:10:54.634 [2024-11-27 08:42:51.287394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.634 [2024-11-27 08:42:51.291705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:54.634 [2024-11-27 08:42:51.294664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:54.634 [2024-11-27 08:42:51.295086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:54.634 [2024-11-27 08:42:51.295117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:54.634 [2024-11-27 08:42:51.295534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:54.634 [2024-11-27 08:42:51.295804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:54.634 [2024-11-27 08:42:51.295840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:54.634 [2024-11-27 08:42:51.296115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.634 "name": "raid_bdev1", 00:10:54.634 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:54.634 "strip_size_kb": 0, 00:10:54.634 "state": "online", 00:10:54.634 "raid_level": "raid1", 00:10:54.634 "superblock": true, 00:10:54.634 "num_base_bdevs": 2, 00:10:54.634 "num_base_bdevs_discovered": 2, 00:10:54.634 "num_base_bdevs_operational": 2, 00:10:54.634 "base_bdevs_list": [ 00:10:54.634 { 00:10:54.634 "name": "pt1", 00:10:54.634 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:54.634 "is_configured": true, 00:10:54.634 "data_offset": 2048, 00:10:54.634 "data_size": 63488 00:10:54.634 }, 00:10:54.634 { 00:10:54.634 "name": "pt2", 00:10:54.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.634 "is_configured": true, 00:10:54.634 "data_offset": 2048, 00:10:54.634 "data_size": 63488 00:10:54.634 } 00:10:54.634 ] 00:10:54.634 }' 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.634 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.202 [2024-11-27 08:42:51.820670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:10:55.202 "name": "raid_bdev1", 00:10:55.202 "aliases": [ 00:10:55.202 "bfb6a67a-8376-46a0-adc6-20541b6ed718" 00:10:55.202 ], 00:10:55.202 "product_name": "Raid Volume", 00:10:55.202 "block_size": 512, 00:10:55.202 "num_blocks": 63488, 00:10:55.202 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:55.202 "assigned_rate_limits": { 00:10:55.202 "rw_ios_per_sec": 0, 00:10:55.202 "rw_mbytes_per_sec": 0, 00:10:55.202 "r_mbytes_per_sec": 0, 00:10:55.202 "w_mbytes_per_sec": 0 00:10:55.202 }, 00:10:55.202 "claimed": false, 00:10:55.202 "zoned": false, 00:10:55.202 "supported_io_types": { 00:10:55.202 "read": true, 00:10:55.202 "write": true, 00:10:55.202 "unmap": false, 00:10:55.202 "flush": false, 00:10:55.202 "reset": true, 00:10:55.202 "nvme_admin": false, 00:10:55.202 "nvme_io": false, 00:10:55.202 "nvme_io_md": false, 00:10:55.202 "write_zeroes": true, 00:10:55.202 "zcopy": false, 00:10:55.202 "get_zone_info": false, 00:10:55.202 "zone_management": false, 00:10:55.202 "zone_append": false, 00:10:55.202 "compare": false, 00:10:55.202 "compare_and_write": false, 00:10:55.202 "abort": false, 00:10:55.202 "seek_hole": false, 00:10:55.202 "seek_data": false, 00:10:55.202 "copy": false, 00:10:55.202 "nvme_iov_md": false 00:10:55.202 }, 00:10:55.202 "memory_domains": [ 00:10:55.202 { 00:10:55.202 "dma_device_id": "system", 00:10:55.202 "dma_device_type": 1 00:10:55.202 }, 00:10:55.202 { 00:10:55.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.202 "dma_device_type": 2 00:10:55.202 }, 00:10:55.202 { 00:10:55.202 "dma_device_id": "system", 00:10:55.202 "dma_device_type": 1 00:10:55.202 }, 00:10:55.202 { 00:10:55.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.202 "dma_device_type": 2 00:10:55.202 } 00:10:55.202 ], 00:10:55.202 "driver_specific": { 00:10:55.202 "raid": { 00:10:55.202 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:55.202 "strip_size_kb": 0, 00:10:55.202 "state": "online", 00:10:55.202 "raid_level": "raid1", 
00:10:55.202 "superblock": true, 00:10:55.202 "num_base_bdevs": 2, 00:10:55.202 "num_base_bdevs_discovered": 2, 00:10:55.202 "num_base_bdevs_operational": 2, 00:10:55.202 "base_bdevs_list": [ 00:10:55.202 { 00:10:55.202 "name": "pt1", 00:10:55.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.202 "is_configured": true, 00:10:55.202 "data_offset": 2048, 00:10:55.202 "data_size": 63488 00:10:55.202 }, 00:10:55.202 { 00:10:55.202 "name": "pt2", 00:10:55.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.202 "is_configured": true, 00:10:55.202 "data_offset": 2048, 00:10:55.202 "data_size": 63488 00:10:55.202 } 00:10:55.202 ] 00:10:55.202 } 00:10:55.202 } 00:10:55.202 }' 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:55.202 pt2' 00:10:55.202 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.462 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.462 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.462 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:55.462 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.462 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.462 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.462 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:55.462 [2024-11-27 08:42:52.092761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bfb6a67a-8376-46a0-adc6-20541b6ed718 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bfb6a67a-8376-46a0-adc6-20541b6ed718 ']' 00:10:55.462 08:42:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.462 [2024-11-27 08:42:52.140325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.462 [2024-11-27 08:42:52.140410] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.462 [2024-11-27 08:42:52.140576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.462 [2024-11-27 08:42:52.140685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.462 [2024-11-27 08:42:52.140712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.462 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.463 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:55.463 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:55.463 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.463 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:55.722 08:42:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.722 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.722 [2024-11-27 08:42:52.272442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:55.722 [2024-11-27 08:42:52.275336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:55.722 [2024-11-27 08:42:52.275632] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:55.722 [2024-11-27 08:42:52.275873] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:55.722 [2024-11-27 08:42:52.276068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.722 [2024-11-27 08:42:52.276181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:55.722 request: 00:10:55.722 { 00:10:55.722 "name": "raid_bdev1", 00:10:55.723 "raid_level": "raid1", 00:10:55.723 "base_bdevs": [ 00:10:55.723 "malloc1", 00:10:55.723 "malloc2" 00:10:55.723 ], 00:10:55.723 "superblock": false, 00:10:55.723 "method": "bdev_raid_create", 00:10:55.723 "req_id": 1 00:10:55.723 } 00:10:55.723 Got 
JSON-RPC error response 00:10:55.723 response: 00:10:55.723 { 00:10:55.723 "code": -17, 00:10:55.723 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:55.723 } 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.723 [2024-11-27 08:42:52.340568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:55.723 [2024-11-27 08:42:52.340663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:55.723 [2024-11-27 08:42:52.340714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:55.723 [2024-11-27 08:42:52.340732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.723 [2024-11-27 08:42:52.344214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.723 [2024-11-27 08:42:52.344272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:55.723 [2024-11-27 08:42:52.344399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:55.723 [2024-11-27 08:42:52.344480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:55.723 pt1 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.723 
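The `NOT rpc_cmd bdev_raid_create ...` sequence traced above (ending in the JSON-RPC `-17` / `File exists` response, because `malloc1`/`malloc2` still carry the old raid superblock) relies on the expected-failure wrapper from `autotest_common.sh`. A hedged sketch of that pattern — the exit-status bookkeeping (`es`, the `es > 128` signal check) follows the trace, but this is a reconstruction, not the real helper:

```shell
# Sketch of the NOT helper pattern: run a command that is EXPECTED to fail
# and turn that failure into success for the caller. Reconstruction based
# on the xtrace above, not the actual autotest_common.sh implementation.
NOT() {
    local es=0
    "$@" || es=$?
    # Success only if the command failed on its own (nonzero exit status)
    # rather than being killed by a signal (status > 128).
    ((es != 0 && es <= 128))
}
```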
08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.723 "name": "raid_bdev1", 00:10:55.723 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:55.723 "strip_size_kb": 0, 00:10:55.723 "state": "configuring", 00:10:55.723 "raid_level": "raid1", 00:10:55.723 "superblock": true, 00:10:55.723 "num_base_bdevs": 2, 00:10:55.723 "num_base_bdevs_discovered": 1, 00:10:55.723 "num_base_bdevs_operational": 2, 00:10:55.723 "base_bdevs_list": [ 00:10:55.723 { 00:10:55.723 "name": "pt1", 00:10:55.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.723 "is_configured": true, 00:10:55.723 "data_offset": 2048, 00:10:55.723 "data_size": 63488 00:10:55.723 }, 00:10:55.723 { 00:10:55.723 "name": null, 00:10:55.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.723 "is_configured": false, 00:10:55.723 "data_offset": 2048, 00:10:55.723 "data_size": 63488 00:10:55.723 } 00:10:55.723 ] 00:10:55.723 }' 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.723 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.292 [2024-11-27 08:42:52.868890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.292 [2024-11-27 08:42:52.869023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.292 [2024-11-27 08:42:52.869063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:56.292 [2024-11-27 08:42:52.869083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.292 [2024-11-27 08:42:52.869818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.292 [2024-11-27 08:42:52.869867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.292 [2024-11-27 08:42:52.869988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:56.292 [2024-11-27 08:42:52.870029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.292 [2024-11-27 08:42:52.870211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:56.292 [2024-11-27 08:42:52.870234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:56.292 [2024-11-27 08:42:52.870573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:56.292 [2024-11-27 08:42:52.871043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:56.292 [2024-11-27 08:42:52.871066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:10:56.292 [2024-11-27 08:42:52.871256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.292 pt2 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.292 "name": "raid_bdev1", 00:10:56.292 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:56.292 "strip_size_kb": 0, 00:10:56.292 "state": "online", 00:10:56.292 "raid_level": "raid1", 00:10:56.292 "superblock": true, 00:10:56.292 "num_base_bdevs": 2, 00:10:56.292 "num_base_bdevs_discovered": 2, 00:10:56.292 "num_base_bdevs_operational": 2, 00:10:56.292 "base_bdevs_list": [ 00:10:56.292 { 00:10:56.292 "name": "pt1", 00:10:56.292 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.292 "is_configured": true, 00:10:56.292 "data_offset": 2048, 00:10:56.292 "data_size": 63488 00:10:56.292 }, 00:10:56.292 { 00:10:56.292 "name": "pt2", 00:10:56.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.292 "is_configured": true, 00:10:56.292 "data_offset": 2048, 00:10:56.292 "data_size": 63488 00:10:56.292 } 00:10:56.292 ] 00:10:56.292 }' 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.292 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.861 [2024-11-27 08:42:53.417389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.861 "name": "raid_bdev1", 00:10:56.861 "aliases": [ 00:10:56.861 "bfb6a67a-8376-46a0-adc6-20541b6ed718" 00:10:56.861 ], 00:10:56.861 "product_name": "Raid Volume", 00:10:56.861 "block_size": 512, 00:10:56.861 "num_blocks": 63488, 00:10:56.861 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:56.861 "assigned_rate_limits": { 00:10:56.861 "rw_ios_per_sec": 0, 00:10:56.861 "rw_mbytes_per_sec": 0, 00:10:56.861 "r_mbytes_per_sec": 0, 00:10:56.861 "w_mbytes_per_sec": 0 00:10:56.861 }, 00:10:56.861 "claimed": false, 00:10:56.861 "zoned": false, 00:10:56.861 "supported_io_types": { 00:10:56.861 "read": true, 00:10:56.861 "write": true, 00:10:56.861 "unmap": false, 00:10:56.861 "flush": false, 00:10:56.861 "reset": true, 00:10:56.861 "nvme_admin": false, 00:10:56.861 "nvme_io": false, 00:10:56.861 "nvme_io_md": false, 00:10:56.861 "write_zeroes": true, 00:10:56.861 "zcopy": false, 00:10:56.861 "get_zone_info": false, 00:10:56.861 "zone_management": false, 00:10:56.861 "zone_append": false, 00:10:56.861 "compare": false, 00:10:56.861 "compare_and_write": false, 00:10:56.861 "abort": false, 00:10:56.861 "seek_hole": false, 00:10:56.861 "seek_data": false, 00:10:56.861 "copy": false, 00:10:56.861 "nvme_iov_md": false 00:10:56.861 }, 00:10:56.861 "memory_domains": [ 00:10:56.861 { 00:10:56.861 "dma_device_id": 
"system", 00:10:56.861 "dma_device_type": 1 00:10:56.861 }, 00:10:56.861 { 00:10:56.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.861 "dma_device_type": 2 00:10:56.861 }, 00:10:56.861 { 00:10:56.861 "dma_device_id": "system", 00:10:56.861 "dma_device_type": 1 00:10:56.861 }, 00:10:56.861 { 00:10:56.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.861 "dma_device_type": 2 00:10:56.861 } 00:10:56.861 ], 00:10:56.861 "driver_specific": { 00:10:56.861 "raid": { 00:10:56.861 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:56.861 "strip_size_kb": 0, 00:10:56.861 "state": "online", 00:10:56.861 "raid_level": "raid1", 00:10:56.861 "superblock": true, 00:10:56.861 "num_base_bdevs": 2, 00:10:56.861 "num_base_bdevs_discovered": 2, 00:10:56.861 "num_base_bdevs_operational": 2, 00:10:56.861 "base_bdevs_list": [ 00:10:56.861 { 00:10:56.861 "name": "pt1", 00:10:56.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.861 "is_configured": true, 00:10:56.861 "data_offset": 2048, 00:10:56.861 "data_size": 63488 00:10:56.861 }, 00:10:56.861 { 00:10:56.861 "name": "pt2", 00:10:56.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.861 "is_configured": true, 00:10:56.861 "data_offset": 2048, 00:10:56.861 "data_size": 63488 00:10:56.861 } 00:10:56.861 ] 00:10:56.861 } 00:10:56.861 } 00:10:56.861 }' 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:56.861 pt2' 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.861 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:57.121 [2024-11-27 08:42:53.685529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bfb6a67a-8376-46a0-adc6-20541b6ed718 '!=' bfb6a67a-8376-46a0-adc6-20541b6ed718 ']' 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.121 [2024-11-27 08:42:53.741197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.121 "name": "raid_bdev1", 00:10:57.121 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:57.121 "strip_size_kb": 0, 00:10:57.121 "state": "online", 00:10:57.121 "raid_level": "raid1", 00:10:57.121 "superblock": true, 00:10:57.121 "num_base_bdevs": 2, 00:10:57.121 "num_base_bdevs_discovered": 1, 00:10:57.121 "num_base_bdevs_operational": 1, 00:10:57.121 "base_bdevs_list": [ 00:10:57.121 { 00:10:57.121 "name": null, 00:10:57.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.121 "is_configured": false, 00:10:57.121 "data_offset": 0, 00:10:57.121 "data_size": 63488 00:10:57.121 }, 00:10:57.121 { 00:10:57.121 "name": "pt2", 00:10:57.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.121 "is_configured": true, 00:10:57.121 "data_offset": 2048, 00:10:57.121 "data_size": 63488 00:10:57.121 } 00:10:57.121 ] 00:10:57.121 }' 00:10:57.121 08:42:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.121 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.690 [2024-11-27 08:42:54.273287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.690 [2024-11-27 08:42:54.273333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.690 [2024-11-27 08:42:54.273516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.690 [2024-11-27 08:42:54.273593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.690 [2024-11-27 08:42:54.273614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:57.690 
08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.690 [2024-11-27 08:42:54.349263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:57.690 [2024-11-27 08:42:54.349586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.690 [2024-11-27 08:42:54.349662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:57.690 [2024-11-27 08:42:54.349884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.690 [2024-11-27 
08:42:54.353333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.690 [2024-11-27 08:42:54.353563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:57.690 [2024-11-27 08:42:54.353800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:57.690 [2024-11-27 08:42:54.353997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:57.690 [2024-11-27 08:42:54.354350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:57.690 [2024-11-27 08:42:54.354491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:57.690 pt2 00:10:57.690 [2024-11-27 08:42:54.354848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.690 [2024-11-27 08:42:54.355098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.690 [2024-11-27 08:42:54.355117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:57.690 [2024-11-27 08:42:54.355301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.690 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.690 "name": "raid_bdev1", 00:10:57.690 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:57.690 "strip_size_kb": 0, 00:10:57.690 "state": "online", 00:10:57.690 "raid_level": "raid1", 00:10:57.690 "superblock": true, 00:10:57.690 "num_base_bdevs": 2, 00:10:57.690 "num_base_bdevs_discovered": 1, 00:10:57.690 "num_base_bdevs_operational": 1, 00:10:57.690 "base_bdevs_list": [ 00:10:57.690 { 00:10:57.690 "name": null, 00:10:57.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.690 "is_configured": false, 00:10:57.690 "data_offset": 2048, 00:10:57.690 "data_size": 63488 00:10:57.690 }, 00:10:57.690 { 00:10:57.690 "name": "pt2", 00:10:57.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.691 "is_configured": true, 00:10:57.691 "data_offset": 2048, 00:10:57.691 "data_size": 63488 00:10:57.691 } 00:10:57.691 ] 00:10:57.691 }' 
00:10:57.691 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.691 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.259 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.259 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.259 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.259 [2024-11-27 08:42:54.870058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.259 [2024-11-27 08:42:54.870114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.259 [2024-11-27 08:42:54.870230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.259 [2024-11-27 08:42:54.870322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.260 [2024-11-27 08:42:54.870356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.260 [2024-11-27 08:42:54.934089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:58.260 [2024-11-27 08:42:54.934175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.260 [2024-11-27 08:42:54.934209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:58.260 [2024-11-27 08:42:54.934224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.260 [2024-11-27 08:42:54.937803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.260 [2024-11-27 08:42:54.937847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:58.260 [2024-11-27 08:42:54.937958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:58.260 [2024-11-27 08:42:54.938053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:58.260 [2024-11-27 08:42:54.938244] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:58.260 [2024-11-27 08:42:54.938262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.260 [2024-11-27 08:42:54.938285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:58.260 [2024-11-27 08:42:54.938375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:10:58.260 [2024-11-27 08:42:54.938533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:58.260 [2024-11-27 08:42:54.938550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:58.260 pt1 00:10:58.260 [2024-11-27 08:42:54.938899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:58.260 [2024-11-27 08:42:54.939120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:58.260 [2024-11-27 08:42:54.939141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.260 [2024-11-27 08:42:54.939322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.260 "name": "raid_bdev1", 00:10:58.260 "uuid": "bfb6a67a-8376-46a0-adc6-20541b6ed718", 00:10:58.260 "strip_size_kb": 0, 00:10:58.260 "state": "online", 00:10:58.260 "raid_level": "raid1", 00:10:58.260 "superblock": true, 00:10:58.260 "num_base_bdevs": 2, 00:10:58.260 "num_base_bdevs_discovered": 1, 00:10:58.260 "num_base_bdevs_operational": 1, 00:10:58.260 "base_bdevs_list": [ 00:10:58.260 { 00:10:58.260 "name": null, 00:10:58.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.260 "is_configured": false, 00:10:58.260 "data_offset": 2048, 00:10:58.260 "data_size": 63488 00:10:58.260 }, 00:10:58.260 { 00:10:58.260 "name": "pt2", 00:10:58.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.260 "is_configured": true, 00:10:58.260 "data_offset": 2048, 00:10:58.260 "data_size": 63488 00:10:58.260 } 00:10:58.260 ] 00:10:58.260 }' 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.260 08:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:58.828 08:42:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.828 [2024-11-27 08:42:55.534901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bfb6a67a-8376-46a0-adc6-20541b6ed718 '!=' bfb6a67a-8376-46a0-adc6-20541b6ed718 ']' 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63294 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 63294 ']' 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 63294 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # uname 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:58.828 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 63294 00:10:59.086 
08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:59.086 killing process with pid 63294 00:10:59.086 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:59.086 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 63294' 00:10:59.087 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # kill 63294 00:10:59.087 [2024-11-27 08:42:55.605929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.087 08:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@975 -- # wait 63294 00:10:59.087 [2024-11-27 08:42:55.606063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.087 [2024-11-27 08:42:55.606146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.087 [2024-11-27 08:42:55.606172] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:59.087 [2024-11-27 08:42:55.795523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.464 08:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:00.464 00:11:00.464 real 0m6.897s 00:11:00.464 user 0m10.783s 00:11:00.464 sys 0m1.045s 00:11:00.464 08:42:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:00.464 08:42:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.464 ************************************ 00:11:00.464 END TEST raid_superblock_test 00:11:00.464 ************************************ 00:11:00.465 08:42:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:00.465 08:42:56 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:11:00.465 08:42:56 bdev_raid 
-- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:00.465 08:42:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.465 ************************************ 00:11:00.465 START TEST raid_read_error_test 00:11:00.465 ************************************ 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid1 2 read 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:00.465 08:42:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.voM1Gi6FPT 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63630 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63630 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 63630 ']' 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:00.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:00.465 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.465 [2024-11-27 08:42:57.100977] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:11:00.465 [2024-11-27 08:42:57.101210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63630 ] 00:11:00.724 [2024-11-27 08:42:57.289044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.724 [2024-11-27 08:42:57.438168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.982 [2024-11-27 08:42:57.667718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.982 [2024-11-27 08:42:57.667834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.550 BaseBdev1_malloc 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.550 true 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.550 [2024-11-27 08:42:58.157941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:01.550 [2024-11-27 08:42:58.158014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.550 [2024-11-27 08:42:58.158043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:01.550 [2024-11-27 08:42:58.158061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.550 [2024-11-27 08:42:58.161137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.550 [2024-11-27 08:42:58.161232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.550 BaseBdev1 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:01.550 BaseBdev2_malloc 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.550 true 00:11:01.550 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.551 [2024-11-27 08:42:58.219478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:01.551 [2024-11-27 08:42:58.219569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.551 [2024-11-27 08:42:58.219594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:01.551 [2024-11-27 08:42:58.219612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.551 [2024-11-27 08:42:58.222770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.551 [2024-11-27 08:42:58.222851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.551 BaseBdev2 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:01.551 08:42:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.551 [2024-11-27 08:42:58.227558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.551 [2024-11-27 08:42:58.230321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.551 [2024-11-27 08:42:58.230730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:01.551 [2024-11-27 08:42:58.230755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.551 [2024-11-27 08:42:58.231110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:01.551 [2024-11-27 08:42:58.231396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:01.551 [2024-11-27 08:42:58.231413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:01.551 [2024-11-27 08:42:58.231667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.551 "name": "raid_bdev1", 00:11:01.551 "uuid": "f4b4a5b8-635e-4ab4-9b13-344541790eae", 00:11:01.551 "strip_size_kb": 0, 00:11:01.551 "state": "online", 00:11:01.551 "raid_level": "raid1", 00:11:01.551 "superblock": true, 00:11:01.551 "num_base_bdevs": 2, 00:11:01.551 "num_base_bdevs_discovered": 2, 00:11:01.551 "num_base_bdevs_operational": 2, 00:11:01.551 "base_bdevs_list": [ 00:11:01.551 { 00:11:01.551 "name": "BaseBdev1", 00:11:01.551 "uuid": "67d10d4c-26a7-5098-8220-a1ec60024a74", 00:11:01.551 "is_configured": true, 00:11:01.551 "data_offset": 2048, 00:11:01.551 "data_size": 63488 00:11:01.551 }, 00:11:01.551 { 00:11:01.551 "name": "BaseBdev2", 00:11:01.551 "uuid": "d565912d-614d-5cd7-89fb-08e8b8834e64", 00:11:01.551 "is_configured": true, 00:11:01.551 "data_offset": 2048, 00:11:01.551 "data_size": 63488 00:11:01.551 } 00:11:01.551 ] 00:11:01.551 }' 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.551 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.119 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:02.119 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:02.378 [2024-11-27 08:42:58.881480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:03.322 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.323 08:42:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.323 "name": "raid_bdev1", 00:11:03.323 "uuid": "f4b4a5b8-635e-4ab4-9b13-344541790eae", 00:11:03.323 "strip_size_kb": 0, 00:11:03.323 "state": "online", 00:11:03.323 "raid_level": "raid1", 00:11:03.323 "superblock": true, 00:11:03.323 "num_base_bdevs": 2, 00:11:03.323 "num_base_bdevs_discovered": 2, 00:11:03.323 "num_base_bdevs_operational": 2, 00:11:03.323 "base_bdevs_list": [ 00:11:03.323 { 00:11:03.323 "name": "BaseBdev1", 00:11:03.323 "uuid": "67d10d4c-26a7-5098-8220-a1ec60024a74", 00:11:03.323 "is_configured": true, 00:11:03.323 "data_offset": 2048, 00:11:03.323 "data_size": 63488 00:11:03.323 }, 00:11:03.323 { 00:11:03.323 "name": "BaseBdev2", 00:11:03.323 "uuid": "d565912d-614d-5cd7-89fb-08e8b8834e64", 00:11:03.323 "is_configured": true, 00:11:03.323 "data_offset": 2048, 00:11:03.323 "data_size": 63488 
00:11:03.323 } 00:11:03.323 ] 00:11:03.323 }' 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.323 08:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.582 08:43:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.582 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.582 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.841 [2024-11-27 08:43:00.341430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.841 [2024-11-27 08:43:00.341493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.841 [2024-11-27 08:43:00.345136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.841 [2024-11-27 08:43:00.345235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.841 [2024-11-27 08:43:00.345449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.841 [2024-11-27 08:43:00.345477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:03.841 { 00:11:03.841 "results": [ 00:11:03.841 { 00:11:03.841 "job": "raid_bdev1", 00:11:03.841 "core_mask": "0x1", 00:11:03.841 "workload": "randrw", 00:11:03.841 "percentage": 50, 00:11:03.841 "status": "finished", 00:11:03.841 "queue_depth": 1, 00:11:03.841 "io_size": 131072, 00:11:03.841 "runtime": 1.456995, 00:11:03.841 "iops": 10597.840074948781, 00:11:03.841 "mibps": 1324.7300093685976, 00:11:03.841 "io_failed": 0, 00:11:03.841 "io_timeout": 0, 00:11:03.841 "avg_latency_us": 89.83928360739708, 00:11:03.841 "min_latency_us": 42.589090909090906, 00:11:03.841 "max_latency_us": 2070.3418181818183 00:11:03.841 } 00:11:03.841 ], 
00:11:03.841 "core_count": 1 00:11:03.841 } 00:11:03.841 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.841 08:43:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63630 00:11:03.841 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 63630 ']' 00:11:03.841 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 63630 00:11:03.842 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # uname 00:11:03.842 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:03.842 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 63630 00:11:03.842 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:03.842 killing process with pid 63630 00:11:03.842 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:03.842 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 63630' 00:11:03.842 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 63630 00:11:03.842 [2024-11-27 08:43:00.382593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.842 08:43:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 63630 00:11:03.842 [2024-11-27 08:43:00.512026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.voM1Gi6FPT 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:05.219 00:11:05.219 real 0m4.797s 00:11:05.219 user 0m5.963s 00:11:05.219 sys 0m0.633s 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:05.219 ************************************ 00:11:05.219 END TEST raid_read_error_test 00:11:05.219 ************************************ 00:11:05.219 08:43:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.219 08:43:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:05.219 08:43:01 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:11:05.219 08:43:01 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:05.219 08:43:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.219 ************************************ 00:11:05.219 START TEST raid_write_error_test 00:11:05.219 ************************************ 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid1 2 write 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mmSzs7VS5b 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63781 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63781 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 63781 ']' 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:05.219 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.219 [2024-11-27 08:43:01.966912] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:11:05.219 [2024-11-27 08:43:01.967152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63781 ] 00:11:05.478 [2024-11-27 08:43:02.171674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.822 [2024-11-27 08:43:02.327355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.121 [2024-11-27 08:43:02.566987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.121 [2024-11-27 08:43:02.567067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.380 08:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:06.380 08:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:11:06.380 08:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.380 08:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:06.380 08:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.380 08:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.380 BaseBdev1_malloc 00:11:06.380 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.380 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:06.380 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.380 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.380 true 00:11:06.380 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:06.380 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.381 [2024-11-27 08:43:03.018629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:06.381 [2024-11-27 08:43:03.018735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.381 [2024-11-27 08:43:03.018767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:06.381 [2024-11-27 08:43:03.018786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.381 [2024-11-27 08:43:03.022085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.381 [2024-11-27 08:43:03.022175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:06.381 BaseBdev1 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.381 BaseBdev2_malloc 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:06.381 08:43:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.381 true 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.381 [2024-11-27 08:43:03.082238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:06.381 [2024-11-27 08:43:03.082317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.381 [2024-11-27 08:43:03.082357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:06.381 [2024-11-27 08:43:03.082388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.381 [2024-11-27 08:43:03.085548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.381 [2024-11-27 08:43:03.085611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:06.381 BaseBdev2 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.381 [2024-11-27 08:43:03.090468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:06.381 [2024-11-27 08:43:03.093146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.381 [2024-11-27 08:43:03.093447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:06.381 [2024-11-27 08:43:03.093471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:06.381 [2024-11-27 08:43:03.093798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:06.381 [2024-11-27 08:43:03.094050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:06.381 [2024-11-27 08:43:03.094068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:06.381 [2024-11-27 08:43:03.094275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.381 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.640 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.641 "name": "raid_bdev1", 00:11:06.641 "uuid": "80c1f1b2-3624-42db-9647-8cdd86f7936c", 00:11:06.641 "strip_size_kb": 0, 00:11:06.641 "state": "online", 00:11:06.641 "raid_level": "raid1", 00:11:06.641 "superblock": true, 00:11:06.641 "num_base_bdevs": 2, 00:11:06.641 "num_base_bdevs_discovered": 2, 00:11:06.641 "num_base_bdevs_operational": 2, 00:11:06.641 "base_bdevs_list": [ 00:11:06.641 { 00:11:06.641 "name": "BaseBdev1", 00:11:06.641 "uuid": "7d08cb53-7e11-578a-8326-8f480ebe02ce", 00:11:06.641 "is_configured": true, 00:11:06.641 "data_offset": 2048, 00:11:06.641 "data_size": 63488 00:11:06.641 }, 00:11:06.641 { 00:11:06.641 "name": "BaseBdev2", 00:11:06.641 "uuid": "cd1b10b4-4efc-5d59-befb-bb28365cf63b", 00:11:06.641 "is_configured": true, 00:11:06.641 "data_offset": 2048, 00:11:06.641 "data_size": 63488 00:11:06.641 } 00:11:06.641 ] 00:11:06.641 }' 00:11:06.641 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.641 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.899 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:06.899 08:43:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:07.157 [2024-11-27 08:43:03.772232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.093 [2024-11-27 08:43:04.646736] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:08.093 [2024-11-27 08:43:04.646842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.093 [2024-11-27 08:43:04.647121] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.093 "name": "raid_bdev1", 00:11:08.093 "uuid": "80c1f1b2-3624-42db-9647-8cdd86f7936c", 00:11:08.093 "strip_size_kb": 0, 00:11:08.093 "state": "online", 00:11:08.093 "raid_level": "raid1", 00:11:08.093 "superblock": true, 00:11:08.093 "num_base_bdevs": 2, 00:11:08.093 "num_base_bdevs_discovered": 1, 00:11:08.093 "num_base_bdevs_operational": 1, 00:11:08.093 "base_bdevs_list": [ 00:11:08.093 { 00:11:08.093 "name": null, 00:11:08.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.093 "is_configured": false, 00:11:08.093 "data_offset": 0, 00:11:08.093 "data_size": 63488 00:11:08.093 }, 00:11:08.093 { 00:11:08.093 "name": 
"BaseBdev2", 00:11:08.093 "uuid": "cd1b10b4-4efc-5d59-befb-bb28365cf63b", 00:11:08.093 "is_configured": true, 00:11:08.093 "data_offset": 2048, 00:11:08.093 "data_size": 63488 00:11:08.093 } 00:11:08.093 ] 00:11:08.093 }' 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.093 08:43:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.660 [2024-11-27 08:43:05.182212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.660 [2024-11-27 08:43:05.182254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.660 [2024-11-27 08:43:05.185685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.660 [2024-11-27 08:43:05.185746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.660 [2024-11-27 08:43:05.185848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.660 [2024-11-27 08:43:05.185864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:08.660 { 00:11:08.660 "results": [ 00:11:08.660 { 00:11:08.660 "job": "raid_bdev1", 00:11:08.660 "core_mask": "0x1", 00:11:08.660 "workload": "randrw", 00:11:08.660 "percentage": 50, 00:11:08.660 "status": "finished", 00:11:08.660 "queue_depth": 1, 00:11:08.660 "io_size": 131072, 00:11:08.660 "runtime": 1.407228, 00:11:08.660 "iops": 12973.02213998016, 00:11:08.660 "mibps": 1621.62776749752, 00:11:08.660 "io_failed": 0, 00:11:08.660 "io_timeout": 0, 
00:11:08.660 "avg_latency_us": 72.74286989084536, 00:11:08.660 "min_latency_us": 40.72727272727273, 00:11:08.660 "max_latency_us": 2025.658181818182 00:11:08.660 } 00:11:08.660 ], 00:11:08.660 "core_count": 1 00:11:08.660 } 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63781 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' -z 63781 ']' 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 63781 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # uname 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 63781 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 63781' 00:11:08.660 killing process with pid 63781 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 63781 00:11:08.660 [2024-11-27 08:43:05.222580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.660 08:43:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@975 -- # wait 63781 00:11:08.660 [2024-11-27 08:43:05.354958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mmSzs7VS5b 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:10.056 00:11:10.056 real 0m4.741s 00:11:10.056 user 0m5.858s 00:11:10.056 sys 0m0.689s 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:10.056 08:43:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.056 ************************************ 00:11:10.056 END TEST raid_write_error_test 00:11:10.057 ************************************ 00:11:10.057 08:43:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:10.057 08:43:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:10.057 08:43:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:10.057 08:43:06 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:11:10.057 08:43:06 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:10.057 08:43:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.057 ************************************ 00:11:10.057 START TEST raid_state_function_test 00:11:10.057 ************************************ 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test raid0 3 false 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:10.057 
08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63919 00:11:10.057 Process raid pid: 63919 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63919' 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63919 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 63919 ']' 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:10.057 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.057 [2024-11-27 08:43:06.723573] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:11:10.057 [2024-11-27 08:43:06.723738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.316 [2024-11-27 08:43:06.904867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.316 [2024-11-27 08:43:07.063306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.575 [2024-11-27 08:43:07.305188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.575 [2024-11-27 08:43:07.305268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.142 [2024-11-27 08:43:07.753806] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.142 [2024-11-27 08:43:07.753881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.142 [2024-11-27 08:43:07.753900] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.142 [2024-11-27 08:43:07.753918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.142 [2024-11-27 08:43:07.753929] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.142 [2024-11-27 08:43:07.753946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.142 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.143 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.143 08:43:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.143 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.143 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.143 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.143 "name": "Existed_Raid", 00:11:11.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.143 "strip_size_kb": 64, 00:11:11.143 "state": "configuring", 00:11:11.143 "raid_level": "raid0", 00:11:11.143 "superblock": false, 00:11:11.143 "num_base_bdevs": 3, 00:11:11.143 "num_base_bdevs_discovered": 0, 00:11:11.143 "num_base_bdevs_operational": 3, 00:11:11.143 "base_bdevs_list": [ 00:11:11.143 { 00:11:11.143 "name": "BaseBdev1", 00:11:11.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.143 "is_configured": false, 00:11:11.143 "data_offset": 0, 00:11:11.143 "data_size": 0 00:11:11.143 }, 00:11:11.143 { 00:11:11.143 "name": "BaseBdev2", 00:11:11.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.143 "is_configured": false, 00:11:11.143 "data_offset": 0, 00:11:11.143 "data_size": 0 00:11:11.143 }, 00:11:11.143 { 00:11:11.143 "name": "BaseBdev3", 00:11:11.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.143 "is_configured": false, 00:11:11.143 "data_offset": 0, 00:11:11.143 "data_size": 0 00:11:11.143 } 00:11:11.143 ] 00:11:11.143 }' 00:11:11.143 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.143 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.710 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.710 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.710 08:43:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.710 [2024-11-27 08:43:08.257920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.711 [2024-11-27 08:43:08.257990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 [2024-11-27 08:43:08.269882] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.711 [2024-11-27 08:43:08.269971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.711 [2024-11-27 08:43:08.269988] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.711 [2024-11-27 08:43:08.270005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.711 [2024-11-27 08:43:08.270016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.711 [2024-11-27 08:43:08.270033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 [2024-11-27 08:43:08.320511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.711 BaseBdev1 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 [ 00:11:11.711 { 00:11:11.711 "name": "BaseBdev1", 00:11:11.711 "aliases": [ 00:11:11.711 "d697a75d-15f4-4d4c-aa49-06f52024ecf9" 00:11:11.711 ], 00:11:11.711 
"product_name": "Malloc disk", 00:11:11.711 "block_size": 512, 00:11:11.711 "num_blocks": 65536, 00:11:11.711 "uuid": "d697a75d-15f4-4d4c-aa49-06f52024ecf9", 00:11:11.711 "assigned_rate_limits": { 00:11:11.711 "rw_ios_per_sec": 0, 00:11:11.711 "rw_mbytes_per_sec": 0, 00:11:11.711 "r_mbytes_per_sec": 0, 00:11:11.711 "w_mbytes_per_sec": 0 00:11:11.711 }, 00:11:11.711 "claimed": true, 00:11:11.711 "claim_type": "exclusive_write", 00:11:11.711 "zoned": false, 00:11:11.711 "supported_io_types": { 00:11:11.711 "read": true, 00:11:11.711 "write": true, 00:11:11.711 "unmap": true, 00:11:11.711 "flush": true, 00:11:11.711 "reset": true, 00:11:11.711 "nvme_admin": false, 00:11:11.711 "nvme_io": false, 00:11:11.711 "nvme_io_md": false, 00:11:11.711 "write_zeroes": true, 00:11:11.711 "zcopy": true, 00:11:11.711 "get_zone_info": false, 00:11:11.711 "zone_management": false, 00:11:11.711 "zone_append": false, 00:11:11.711 "compare": false, 00:11:11.711 "compare_and_write": false, 00:11:11.711 "abort": true, 00:11:11.711 "seek_hole": false, 00:11:11.711 "seek_data": false, 00:11:11.711 "copy": true, 00:11:11.711 "nvme_iov_md": false 00:11:11.711 }, 00:11:11.711 "memory_domains": [ 00:11:11.711 { 00:11:11.711 "dma_device_id": "system", 00:11:11.711 "dma_device_type": 1 00:11:11.711 }, 00:11:11.711 { 00:11:11.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.711 "dma_device_type": 2 00:11:11.711 } 00:11:11.711 ], 00:11:11.711 "driver_specific": {} 00:11:11.711 } 00:11:11.711 ] 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.711 08:43:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.711 "name": "Existed_Raid", 00:11:11.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.711 "strip_size_kb": 64, 00:11:11.711 "state": "configuring", 00:11:11.711 "raid_level": "raid0", 00:11:11.711 "superblock": false, 00:11:11.711 "num_base_bdevs": 3, 00:11:11.711 "num_base_bdevs_discovered": 1, 00:11:11.711 "num_base_bdevs_operational": 3, 00:11:11.711 "base_bdevs_list": [ 00:11:11.711 { 00:11:11.711 "name": "BaseBdev1", 
00:11:11.711 "uuid": "d697a75d-15f4-4d4c-aa49-06f52024ecf9", 00:11:11.711 "is_configured": true, 00:11:11.711 "data_offset": 0, 00:11:11.711 "data_size": 65536 00:11:11.711 }, 00:11:11.711 { 00:11:11.711 "name": "BaseBdev2", 00:11:11.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.711 "is_configured": false, 00:11:11.711 "data_offset": 0, 00:11:11.711 "data_size": 0 00:11:11.711 }, 00:11:11.711 { 00:11:11.711 "name": "BaseBdev3", 00:11:11.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.711 "is_configured": false, 00:11:11.711 "data_offset": 0, 00:11:11.711 "data_size": 0 00:11:11.711 } 00:11:11.711 ] 00:11:11.711 }' 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.711 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.280 [2024-11-27 08:43:08.876822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.280 [2024-11-27 08:43:08.876920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.280 [2024-11-27 
08:43:08.884853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.280 [2024-11-27 08:43:08.887556] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.280 [2024-11-27 08:43:08.887614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.280 [2024-11-27 08:43:08.887632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.280 [2024-11-27 08:43:08.887650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.280 "name": "Existed_Raid", 00:11:12.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.280 "strip_size_kb": 64, 00:11:12.280 "state": "configuring", 00:11:12.280 "raid_level": "raid0", 00:11:12.280 "superblock": false, 00:11:12.280 "num_base_bdevs": 3, 00:11:12.280 "num_base_bdevs_discovered": 1, 00:11:12.280 "num_base_bdevs_operational": 3, 00:11:12.280 "base_bdevs_list": [ 00:11:12.280 { 00:11:12.280 "name": "BaseBdev1", 00:11:12.280 "uuid": "d697a75d-15f4-4d4c-aa49-06f52024ecf9", 00:11:12.280 "is_configured": true, 00:11:12.280 "data_offset": 0, 00:11:12.280 "data_size": 65536 00:11:12.280 }, 00:11:12.280 { 00:11:12.280 "name": "BaseBdev2", 00:11:12.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.280 "is_configured": false, 00:11:12.280 "data_offset": 0, 00:11:12.280 "data_size": 0 00:11:12.280 }, 00:11:12.280 { 00:11:12.280 "name": "BaseBdev3", 00:11:12.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.280 "is_configured": false, 00:11:12.280 "data_offset": 0, 00:11:12.280 "data_size": 0 00:11:12.280 } 00:11:12.280 ] 00:11:12.280 }' 00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:12.280 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.857 [2024-11-27 08:43:09.480869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.857 BaseBdev2 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.857 08:43:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.857 [ 00:11:12.857 { 00:11:12.857 "name": "BaseBdev2", 00:11:12.857 "aliases": [ 00:11:12.857 "f9162227-1695-42f1-aeba-61b5cd2ab9e0" 00:11:12.857 ], 00:11:12.857 "product_name": "Malloc disk", 00:11:12.857 "block_size": 512, 00:11:12.857 "num_blocks": 65536, 00:11:12.857 "uuid": "f9162227-1695-42f1-aeba-61b5cd2ab9e0", 00:11:12.857 "assigned_rate_limits": { 00:11:12.857 "rw_ios_per_sec": 0, 00:11:12.857 "rw_mbytes_per_sec": 0, 00:11:12.857 "r_mbytes_per_sec": 0, 00:11:12.857 "w_mbytes_per_sec": 0 00:11:12.857 }, 00:11:12.857 "claimed": true, 00:11:12.857 "claim_type": "exclusive_write", 00:11:12.857 "zoned": false, 00:11:12.857 "supported_io_types": { 00:11:12.857 "read": true, 00:11:12.857 "write": true, 00:11:12.857 "unmap": true, 00:11:12.857 "flush": true, 00:11:12.857 "reset": true, 00:11:12.857 "nvme_admin": false, 00:11:12.857 "nvme_io": false, 00:11:12.857 "nvme_io_md": false, 00:11:12.857 "write_zeroes": true, 00:11:12.857 "zcopy": true, 00:11:12.857 "get_zone_info": false, 00:11:12.857 "zone_management": false, 00:11:12.857 "zone_append": false, 00:11:12.857 "compare": false, 00:11:12.857 "compare_and_write": false, 00:11:12.857 "abort": true, 00:11:12.857 "seek_hole": false, 00:11:12.857 "seek_data": false, 00:11:12.857 "copy": true, 00:11:12.857 "nvme_iov_md": false 00:11:12.857 }, 00:11:12.857 "memory_domains": [ 00:11:12.857 { 00:11:12.857 "dma_device_id": "system", 00:11:12.857 "dma_device_type": 1 00:11:12.857 }, 00:11:12.857 { 00:11:12.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.857 "dma_device_type": 2 00:11:12.857 } 00:11:12.857 ], 00:11:12.857 "driver_specific": {} 00:11:12.857 } 00:11:12.857 ] 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.857 08:43:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.857 "name": "Existed_Raid", 00:11:12.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.857 "strip_size_kb": 64, 00:11:12.857 "state": "configuring", 00:11:12.857 "raid_level": "raid0", 00:11:12.857 "superblock": false, 00:11:12.857 "num_base_bdevs": 3, 00:11:12.857 "num_base_bdevs_discovered": 2, 00:11:12.857 "num_base_bdevs_operational": 3, 00:11:12.857 "base_bdevs_list": [ 00:11:12.857 { 00:11:12.857 "name": "BaseBdev1", 00:11:12.857 "uuid": "d697a75d-15f4-4d4c-aa49-06f52024ecf9", 00:11:12.857 "is_configured": true, 00:11:12.857 "data_offset": 0, 00:11:12.857 "data_size": 65536 00:11:12.857 }, 00:11:12.857 { 00:11:12.857 "name": "BaseBdev2", 00:11:12.857 "uuid": "f9162227-1695-42f1-aeba-61b5cd2ab9e0", 00:11:12.857 "is_configured": true, 00:11:12.857 "data_offset": 0, 00:11:12.857 "data_size": 65536 00:11:12.857 }, 00:11:12.857 { 00:11:12.857 "name": "BaseBdev3", 00:11:12.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.857 "is_configured": false, 00:11:12.857 "data_offset": 0, 00:11:12.857 "data_size": 0 00:11:12.857 } 00:11:12.857 ] 00:11:12.857 }' 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.857 08:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.426 [2024-11-27 08:43:10.071197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.426 [2024-11-27 08:43:10.071288] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:13.426 [2024-11-27 08:43:10.071315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:13.426 [2024-11-27 08:43:10.071744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:13.426 [2024-11-27 08:43:10.072038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:13.426 [2024-11-27 08:43:10.072067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:13.426 [2024-11-27 08:43:10.072472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.426 BaseBdev3 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.426 
08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.426 [ 00:11:13.426 { 00:11:13.426 "name": "BaseBdev3", 00:11:13.426 "aliases": [ 00:11:13.426 "252e9efa-c53f-4ba2-9e31-d65ccea98bd8" 00:11:13.426 ], 00:11:13.426 "product_name": "Malloc disk", 00:11:13.426 "block_size": 512, 00:11:13.426 "num_blocks": 65536, 00:11:13.426 "uuid": "252e9efa-c53f-4ba2-9e31-d65ccea98bd8", 00:11:13.426 "assigned_rate_limits": { 00:11:13.426 "rw_ios_per_sec": 0, 00:11:13.426 "rw_mbytes_per_sec": 0, 00:11:13.426 "r_mbytes_per_sec": 0, 00:11:13.426 "w_mbytes_per_sec": 0 00:11:13.426 }, 00:11:13.426 "claimed": true, 00:11:13.426 "claim_type": "exclusive_write", 00:11:13.426 "zoned": false, 00:11:13.426 "supported_io_types": { 00:11:13.426 "read": true, 00:11:13.426 "write": true, 00:11:13.426 "unmap": true, 00:11:13.426 "flush": true, 00:11:13.426 "reset": true, 00:11:13.426 "nvme_admin": false, 00:11:13.426 "nvme_io": false, 00:11:13.426 "nvme_io_md": false, 00:11:13.426 "write_zeroes": true, 00:11:13.426 "zcopy": true, 00:11:13.426 "get_zone_info": false, 00:11:13.426 "zone_management": false, 00:11:13.426 "zone_append": false, 00:11:13.426 "compare": false, 00:11:13.426 "compare_and_write": false, 00:11:13.426 "abort": true, 00:11:13.426 "seek_hole": false, 00:11:13.426 "seek_data": false, 00:11:13.426 "copy": true, 00:11:13.426 "nvme_iov_md": false 00:11:13.426 }, 00:11:13.426 "memory_domains": [ 00:11:13.426 { 00:11:13.426 "dma_device_id": "system", 00:11:13.426 "dma_device_type": 1 00:11:13.426 }, 00:11:13.426 { 00:11:13.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.426 "dma_device_type": 2 00:11:13.426 } 00:11:13.426 ], 00:11:13.426 "driver_specific": {} 00:11:13.426 } 00:11:13.426 ] 
00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.426 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.427 "name": "Existed_Raid", 00:11:13.427 "uuid": "3bf79944-7faf-454d-8e28-fa0e14bd0327", 00:11:13.427 "strip_size_kb": 64, 00:11:13.427 "state": "online", 00:11:13.427 "raid_level": "raid0", 00:11:13.427 "superblock": false, 00:11:13.427 "num_base_bdevs": 3, 00:11:13.427 "num_base_bdevs_discovered": 3, 00:11:13.427 "num_base_bdevs_operational": 3, 00:11:13.427 "base_bdevs_list": [ 00:11:13.427 { 00:11:13.427 "name": "BaseBdev1", 00:11:13.427 "uuid": "d697a75d-15f4-4d4c-aa49-06f52024ecf9", 00:11:13.427 "is_configured": true, 00:11:13.427 "data_offset": 0, 00:11:13.427 "data_size": 65536 00:11:13.427 }, 00:11:13.427 { 00:11:13.427 "name": "BaseBdev2", 00:11:13.427 "uuid": "f9162227-1695-42f1-aeba-61b5cd2ab9e0", 00:11:13.427 "is_configured": true, 00:11:13.427 "data_offset": 0, 00:11:13.427 "data_size": 65536 00:11:13.427 }, 00:11:13.427 { 00:11:13.427 "name": "BaseBdev3", 00:11:13.427 "uuid": "252e9efa-c53f-4ba2-9e31-d65ccea98bd8", 00:11:13.427 "is_configured": true, 00:11:13.427 "data_offset": 0, 00:11:13.427 "data_size": 65536 00:11:13.427 } 00:11:13.427 ] 00:11:13.427 }' 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.427 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.995 [2024-11-27 08:43:10.644026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.995 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.995 "name": "Existed_Raid", 00:11:13.995 "aliases": [ 00:11:13.995 "3bf79944-7faf-454d-8e28-fa0e14bd0327" 00:11:13.995 ], 00:11:13.995 "product_name": "Raid Volume", 00:11:13.995 "block_size": 512, 00:11:13.995 "num_blocks": 196608, 00:11:13.995 "uuid": "3bf79944-7faf-454d-8e28-fa0e14bd0327", 00:11:13.995 "assigned_rate_limits": { 00:11:13.995 "rw_ios_per_sec": 0, 00:11:13.995 "rw_mbytes_per_sec": 0, 00:11:13.995 "r_mbytes_per_sec": 0, 00:11:13.995 "w_mbytes_per_sec": 0 00:11:13.995 }, 00:11:13.995 "claimed": false, 00:11:13.995 "zoned": false, 00:11:13.995 "supported_io_types": { 00:11:13.995 "read": true, 00:11:13.995 "write": true, 00:11:13.995 "unmap": true, 00:11:13.995 "flush": true, 00:11:13.995 "reset": true, 00:11:13.995 "nvme_admin": false, 00:11:13.995 "nvme_io": false, 00:11:13.995 "nvme_io_md": false, 00:11:13.995 "write_zeroes": true, 00:11:13.995 "zcopy": false, 00:11:13.995 "get_zone_info": false, 00:11:13.995 "zone_management": false, 00:11:13.995 
"zone_append": false, 00:11:13.995 "compare": false, 00:11:13.995 "compare_and_write": false, 00:11:13.995 "abort": false, 00:11:13.995 "seek_hole": false, 00:11:13.995 "seek_data": false, 00:11:13.995 "copy": false, 00:11:13.995 "nvme_iov_md": false 00:11:13.995 }, 00:11:13.995 "memory_domains": [ 00:11:13.995 { 00:11:13.995 "dma_device_id": "system", 00:11:13.995 "dma_device_type": 1 00:11:13.995 }, 00:11:13.995 { 00:11:13.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.995 "dma_device_type": 2 00:11:13.995 }, 00:11:13.995 { 00:11:13.995 "dma_device_id": "system", 00:11:13.995 "dma_device_type": 1 00:11:13.995 }, 00:11:13.995 { 00:11:13.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.995 "dma_device_type": 2 00:11:13.995 }, 00:11:13.995 { 00:11:13.995 "dma_device_id": "system", 00:11:13.995 "dma_device_type": 1 00:11:13.995 }, 00:11:13.995 { 00:11:13.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.995 "dma_device_type": 2 00:11:13.995 } 00:11:13.995 ], 00:11:13.995 "driver_specific": { 00:11:13.995 "raid": { 00:11:13.995 "uuid": "3bf79944-7faf-454d-8e28-fa0e14bd0327", 00:11:13.995 "strip_size_kb": 64, 00:11:13.995 "state": "online", 00:11:13.995 "raid_level": "raid0", 00:11:13.995 "superblock": false, 00:11:13.995 "num_base_bdevs": 3, 00:11:13.995 "num_base_bdevs_discovered": 3, 00:11:13.995 "num_base_bdevs_operational": 3, 00:11:13.995 "base_bdevs_list": [ 00:11:13.995 { 00:11:13.995 "name": "BaseBdev1", 00:11:13.995 "uuid": "d697a75d-15f4-4d4c-aa49-06f52024ecf9", 00:11:13.995 "is_configured": true, 00:11:13.995 "data_offset": 0, 00:11:13.995 "data_size": 65536 00:11:13.995 }, 00:11:13.995 { 00:11:13.996 "name": "BaseBdev2", 00:11:13.996 "uuid": "f9162227-1695-42f1-aeba-61b5cd2ab9e0", 00:11:13.996 "is_configured": true, 00:11:13.996 "data_offset": 0, 00:11:13.996 "data_size": 65536 00:11:13.996 }, 00:11:13.996 { 00:11:13.996 "name": "BaseBdev3", 00:11:13.996 "uuid": "252e9efa-c53f-4ba2-9e31-d65ccea98bd8", 00:11:13.996 "is_configured": true, 
00:11:13.996 "data_offset": 0, 00:11:13.996 "data_size": 65536 00:11:13.996 } 00:11:13.996 ] 00:11:13.996 } 00:11:13.996 } 00:11:13.996 }' 00:11:13.996 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.996 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:13.996 BaseBdev2 00:11:13.996 BaseBdev3' 00:11:13.996 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.254 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.254 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.255 08:43:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.255 08:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.255 [2024-11-27 08:43:10.951670] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.255 [2024-11-27 08:43:10.951710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.255 [2024-11-27 08:43:10.951803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.514 "name": "Existed_Raid", 00:11:14.514 "uuid": "3bf79944-7faf-454d-8e28-fa0e14bd0327", 00:11:14.514 "strip_size_kb": 64, 00:11:14.514 "state": "offline", 00:11:14.514 "raid_level": "raid0", 00:11:14.514 "superblock": false, 00:11:14.514 "num_base_bdevs": 3, 00:11:14.514 "num_base_bdevs_discovered": 2, 00:11:14.514 "num_base_bdevs_operational": 2, 00:11:14.514 "base_bdevs_list": [ 00:11:14.514 { 00:11:14.514 "name": null, 00:11:14.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.514 "is_configured": false, 00:11:14.514 "data_offset": 0, 00:11:14.514 "data_size": 65536 00:11:14.514 }, 00:11:14.514 { 00:11:14.514 "name": "BaseBdev2", 00:11:14.514 "uuid": "f9162227-1695-42f1-aeba-61b5cd2ab9e0", 00:11:14.514 "is_configured": true, 00:11:14.514 "data_offset": 0, 00:11:14.514 "data_size": 65536 00:11:14.514 }, 00:11:14.514 { 00:11:14.514 "name": "BaseBdev3", 00:11:14.514 "uuid": "252e9efa-c53f-4ba2-9e31-d65ccea98bd8", 00:11:14.514 "is_configured": true, 00:11:14.514 "data_offset": 0, 00:11:14.514 "data_size": 65536 00:11:14.514 } 00:11:14.514 ] 00:11:14.514 }' 00:11:14.514 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.514 08:43:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 [2024-11-27 08:43:11.631617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.081 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.081 [2024-11-27 08:43:11.777970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:15.081 [2024-11-27 08:43:11.778046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.341 08:43:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.341 BaseBdev2 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.341 08:43:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.341 08:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.341 [ 00:11:15.341 { 00:11:15.341 "name": "BaseBdev2", 00:11:15.341 "aliases": [ 00:11:15.341 "30adb9fc-a462-4183-b823-4ad647aebfa8" 00:11:15.341 ], 00:11:15.341 "product_name": "Malloc disk", 00:11:15.341 "block_size": 512, 00:11:15.341 "num_blocks": 65536, 00:11:15.341 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:15.341 "assigned_rate_limits": { 00:11:15.341 "rw_ios_per_sec": 0, 00:11:15.341 "rw_mbytes_per_sec": 0, 00:11:15.341 "r_mbytes_per_sec": 0, 00:11:15.341 "w_mbytes_per_sec": 0 00:11:15.341 }, 00:11:15.341 "claimed": false, 00:11:15.341 "zoned": false, 00:11:15.341 "supported_io_types": { 00:11:15.341 "read": true, 00:11:15.341 "write": true, 00:11:15.341 "unmap": true, 00:11:15.341 "flush": true, 00:11:15.341 "reset": true, 00:11:15.341 "nvme_admin": false, 00:11:15.341 "nvme_io": false, 00:11:15.341 "nvme_io_md": false, 00:11:15.341 "write_zeroes": true, 00:11:15.341 "zcopy": true, 00:11:15.341 "get_zone_info": false, 00:11:15.341 "zone_management": false, 00:11:15.341 "zone_append": false, 00:11:15.341 "compare": false, 00:11:15.341 "compare_and_write": false, 00:11:15.341 "abort": true, 00:11:15.341 "seek_hole": false, 00:11:15.341 "seek_data": false, 00:11:15.341 "copy": true, 00:11:15.341 "nvme_iov_md": false 00:11:15.341 }, 00:11:15.341 "memory_domains": [ 00:11:15.341 { 00:11:15.341 "dma_device_id": "system", 00:11:15.341 "dma_device_type": 1 00:11:15.341 }, 00:11:15.341 { 00:11:15.341 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:15.341 "dma_device_type": 2 00:11:15.341 } 00:11:15.341 ], 00:11:15.341 "driver_specific": {} 00:11:15.341 } 00:11:15.341 ] 00:11:15.341 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.341 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:15.341 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.341 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.341 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.342 BaseBdev3 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.342 08:43:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.342 [ 00:11:15.342 { 00:11:15.342 "name": "BaseBdev3", 00:11:15.342 "aliases": [ 00:11:15.342 "35af960a-6a88-4328-9365-5e10623002e6" 00:11:15.342 ], 00:11:15.342 "product_name": "Malloc disk", 00:11:15.342 "block_size": 512, 00:11:15.342 "num_blocks": 65536, 00:11:15.342 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:15.342 "assigned_rate_limits": { 00:11:15.342 "rw_ios_per_sec": 0, 00:11:15.342 "rw_mbytes_per_sec": 0, 00:11:15.342 "r_mbytes_per_sec": 0, 00:11:15.342 "w_mbytes_per_sec": 0 00:11:15.342 }, 00:11:15.342 "claimed": false, 00:11:15.342 "zoned": false, 00:11:15.342 "supported_io_types": { 00:11:15.342 "read": true, 00:11:15.342 "write": true, 00:11:15.342 "unmap": true, 00:11:15.342 "flush": true, 00:11:15.342 "reset": true, 00:11:15.342 "nvme_admin": false, 00:11:15.342 "nvme_io": false, 00:11:15.342 "nvme_io_md": false, 00:11:15.342 "write_zeroes": true, 00:11:15.342 "zcopy": true, 00:11:15.342 "get_zone_info": false, 00:11:15.342 "zone_management": false, 00:11:15.342 "zone_append": false, 00:11:15.342 "compare": false, 00:11:15.342 "compare_and_write": false, 00:11:15.342 "abort": true, 00:11:15.342 "seek_hole": false, 00:11:15.342 "seek_data": false, 00:11:15.342 "copy": true, 00:11:15.342 "nvme_iov_md": false 00:11:15.342 }, 00:11:15.342 "memory_domains": [ 00:11:15.342 { 00:11:15.342 "dma_device_id": "system", 00:11:15.342 "dma_device_type": 1 00:11:15.342 }, 00:11:15.342 { 00:11:15.342 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:15.342 "dma_device_type": 2 00:11:15.342 } 00:11:15.342 ], 00:11:15.342 "driver_specific": {} 00:11:15.342 } 00:11:15.342 ] 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.342 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.604 [2024-11-27 08:43:12.103987] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.604 [2024-11-27 08:43:12.104048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.604 [2024-11-27 08:43:12.104083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.604 [2024-11-27 08:43:12.106784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.604 
08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.604 "name": "Existed_Raid", 00:11:15.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.604 "strip_size_kb": 64, 00:11:15.604 "state": "configuring", 00:11:15.604 "raid_level": "raid0", 00:11:15.604 "superblock": false, 00:11:15.604 "num_base_bdevs": 3, 00:11:15.604 "num_base_bdevs_discovered": 2, 00:11:15.604 "num_base_bdevs_operational": 3, 00:11:15.604 "base_bdevs_list": [ 00:11:15.604 { 00:11:15.604 "name": "BaseBdev1", 00:11:15.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.604 "is_configured": false, 00:11:15.604 
"data_offset": 0, 00:11:15.604 "data_size": 0 00:11:15.604 }, 00:11:15.604 { 00:11:15.604 "name": "BaseBdev2", 00:11:15.604 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:15.604 "is_configured": true, 00:11:15.604 "data_offset": 0, 00:11:15.604 "data_size": 65536 00:11:15.604 }, 00:11:15.604 { 00:11:15.604 "name": "BaseBdev3", 00:11:15.604 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:15.604 "is_configured": true, 00:11:15.604 "data_offset": 0, 00:11:15.604 "data_size": 65536 00:11:15.604 } 00:11:15.604 ] 00:11:15.604 }' 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.604 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.171 [2024-11-27 08:43:12.636170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.171 "name": "Existed_Raid", 00:11:16.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.171 "strip_size_kb": 64, 00:11:16.171 "state": "configuring", 00:11:16.171 "raid_level": "raid0", 00:11:16.171 "superblock": false, 00:11:16.171 "num_base_bdevs": 3, 00:11:16.171 "num_base_bdevs_discovered": 1, 00:11:16.171 "num_base_bdevs_operational": 3, 00:11:16.171 "base_bdevs_list": [ 00:11:16.171 { 00:11:16.171 "name": "BaseBdev1", 00:11:16.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.171 "is_configured": false, 00:11:16.171 "data_offset": 0, 00:11:16.171 "data_size": 0 00:11:16.171 }, 00:11:16.171 { 00:11:16.171 "name": null, 00:11:16.171 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:16.171 "is_configured": false, 00:11:16.171 "data_offset": 0, 00:11:16.171 "data_size": 65536 00:11:16.171 }, 00:11:16.171 { 
00:11:16.171 "name": "BaseBdev3", 00:11:16.171 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:16.171 "is_configured": true, 00:11:16.171 "data_offset": 0, 00:11:16.171 "data_size": 65536 00:11:16.171 } 00:11:16.171 ] 00:11:16.171 }' 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.171 08:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.430 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.430 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.430 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:16.430 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.690 [2024-11-27 08:43:13.265348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.690 BaseBdev1 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:11:16.690 08:43:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.690 [ 00:11:16.690 { 00:11:16.690 "name": "BaseBdev1", 00:11:16.690 "aliases": [ 00:11:16.690 "7f1c7659-62ab-49d6-968b-4edf48d300d2" 00:11:16.690 ], 00:11:16.690 "product_name": "Malloc disk", 00:11:16.690 "block_size": 512, 00:11:16.690 "num_blocks": 65536, 00:11:16.690 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:16.690 "assigned_rate_limits": { 00:11:16.690 "rw_ios_per_sec": 0, 00:11:16.690 "rw_mbytes_per_sec": 0, 00:11:16.690 "r_mbytes_per_sec": 0, 00:11:16.690 "w_mbytes_per_sec": 0 00:11:16.690 }, 00:11:16.690 "claimed": true, 00:11:16.690 "claim_type": "exclusive_write", 00:11:16.690 "zoned": false, 00:11:16.690 "supported_io_types": { 00:11:16.690 "read": true, 00:11:16.690 "write": true, 00:11:16.690 "unmap": true, 00:11:16.690 "flush": true, 
00:11:16.690 "reset": true, 00:11:16.690 "nvme_admin": false, 00:11:16.690 "nvme_io": false, 00:11:16.690 "nvme_io_md": false, 00:11:16.690 "write_zeroes": true, 00:11:16.690 "zcopy": true, 00:11:16.690 "get_zone_info": false, 00:11:16.690 "zone_management": false, 00:11:16.690 "zone_append": false, 00:11:16.690 "compare": false, 00:11:16.690 "compare_and_write": false, 00:11:16.690 "abort": true, 00:11:16.690 "seek_hole": false, 00:11:16.690 "seek_data": false, 00:11:16.690 "copy": true, 00:11:16.690 "nvme_iov_md": false 00:11:16.690 }, 00:11:16.690 "memory_domains": [ 00:11:16.690 { 00:11:16.690 "dma_device_id": "system", 00:11:16.690 "dma_device_type": 1 00:11:16.690 }, 00:11:16.690 { 00:11:16.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.690 "dma_device_type": 2 00:11:16.690 } 00:11:16.690 ], 00:11:16.690 "driver_specific": {} 00:11:16.690 } 00:11:16.690 ] 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.690 "name": "Existed_Raid", 00:11:16.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.690 "strip_size_kb": 64, 00:11:16.690 "state": "configuring", 00:11:16.690 "raid_level": "raid0", 00:11:16.690 "superblock": false, 00:11:16.690 "num_base_bdevs": 3, 00:11:16.690 "num_base_bdevs_discovered": 2, 00:11:16.690 "num_base_bdevs_operational": 3, 00:11:16.690 "base_bdevs_list": [ 00:11:16.690 { 00:11:16.690 "name": "BaseBdev1", 00:11:16.690 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:16.690 "is_configured": true, 00:11:16.690 "data_offset": 0, 00:11:16.690 "data_size": 65536 00:11:16.690 }, 00:11:16.690 { 00:11:16.690 "name": null, 00:11:16.690 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:16.690 "is_configured": false, 00:11:16.690 "data_offset": 0, 00:11:16.690 "data_size": 65536 00:11:16.690 }, 00:11:16.690 { 00:11:16.690 "name": "BaseBdev3", 00:11:16.690 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:16.690 "is_configured": true, 00:11:16.690 "data_offset": 0, 00:11:16.690 "data_size": 65536 
00:11:16.690 } 00:11:16.690 ] 00:11:16.690 }' 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.690 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.258 [2024-11-27 08:43:13.949600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.258 
08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.258 08:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.516 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.516 "name": "Existed_Raid", 00:11:17.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.516 "strip_size_kb": 64, 00:11:17.516 "state": "configuring", 00:11:17.516 "raid_level": "raid0", 00:11:17.516 "superblock": false, 00:11:17.516 "num_base_bdevs": 3, 00:11:17.516 "num_base_bdevs_discovered": 1, 00:11:17.516 "num_base_bdevs_operational": 3, 00:11:17.516 "base_bdevs_list": [ 00:11:17.516 { 00:11:17.516 "name": "BaseBdev1", 00:11:17.516 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:17.516 "is_configured": true, 00:11:17.516 "data_offset": 0, 00:11:17.516 "data_size": 65536 00:11:17.517 }, 00:11:17.517 { 00:11:17.517 "name": null, 
00:11:17.517 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:17.517 "is_configured": false, 00:11:17.517 "data_offset": 0, 00:11:17.517 "data_size": 65536 00:11:17.517 }, 00:11:17.517 { 00:11:17.517 "name": null, 00:11:17.517 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:17.517 "is_configured": false, 00:11:17.517 "data_offset": 0, 00:11:17.517 "data_size": 65536 00:11:17.517 } 00:11:17.517 ] 00:11:17.517 }' 00:11:17.517 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.517 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.775 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.775 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.775 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.775 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.775 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.034 [2024-11-27 08:43:14.553899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.034 "name": "Existed_Raid", 00:11:18.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.034 "strip_size_kb": 64, 00:11:18.034 "state": "configuring", 00:11:18.034 "raid_level": "raid0", 00:11:18.034 "superblock": false, 00:11:18.034 
"num_base_bdevs": 3, 00:11:18.034 "num_base_bdevs_discovered": 2, 00:11:18.034 "num_base_bdevs_operational": 3, 00:11:18.034 "base_bdevs_list": [ 00:11:18.034 { 00:11:18.034 "name": "BaseBdev1", 00:11:18.034 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:18.034 "is_configured": true, 00:11:18.034 "data_offset": 0, 00:11:18.034 "data_size": 65536 00:11:18.034 }, 00:11:18.034 { 00:11:18.034 "name": null, 00:11:18.034 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:18.034 "is_configured": false, 00:11:18.034 "data_offset": 0, 00:11:18.034 "data_size": 65536 00:11:18.034 }, 00:11:18.034 { 00:11:18.034 "name": "BaseBdev3", 00:11:18.034 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:18.034 "is_configured": true, 00:11:18.034 "data_offset": 0, 00:11:18.034 "data_size": 65536 00:11:18.034 } 00:11:18.034 ] 00:11:18.034 }' 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.034 08:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.603 08:43:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.603 [2024-11-27 08:43:15.142154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.603 "name": "Existed_Raid", 00:11:18.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.603 "strip_size_kb": 64, 00:11:18.603 "state": "configuring", 00:11:18.603 "raid_level": "raid0", 00:11:18.603 "superblock": false, 00:11:18.603 "num_base_bdevs": 3, 00:11:18.603 "num_base_bdevs_discovered": 1, 00:11:18.603 "num_base_bdevs_operational": 3, 00:11:18.603 "base_bdevs_list": [ 00:11:18.603 { 00:11:18.603 "name": null, 00:11:18.603 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:18.603 "is_configured": false, 00:11:18.603 "data_offset": 0, 00:11:18.603 "data_size": 65536 00:11:18.603 }, 00:11:18.603 { 00:11:18.603 "name": null, 00:11:18.603 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:18.603 "is_configured": false, 00:11:18.603 "data_offset": 0, 00:11:18.603 "data_size": 65536 00:11:18.603 }, 00:11:18.603 { 00:11:18.603 "name": "BaseBdev3", 00:11:18.603 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:18.603 "is_configured": true, 00:11:18.603 "data_offset": 0, 00:11:18.603 "data_size": 65536 00:11:18.603 } 00:11:18.603 ] 00:11:18.603 }' 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.603 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 [2024-11-27 08:43:15.799230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
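For readers following the trace: each `verify_raid_bdev_state` invocation above boils down to fetching the raid bdev JSON (`rpc_cmd bdev_raid_get_bdevs all` filtered through `jq -r '.[] | select(.name == "Existed_Raid")'`) and comparing a handful of fields against the expected values. A minimal Python sketch of the same checks, fed with an abridged copy of the "configuring" JSON dumped in the trace (the helper name and this re-expression are ours for illustration, not part of bdev_raid.sh):

```python
import json

# Abridged copy of the raid_bdev_info JSON printed in the trace above
# (state "configuring": only BaseBdev3 is configured, two slots are empty).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, state, level, strip_size_kb, operational):
    """Mirror the field checks verify_raid_bdev_state performs via jq."""
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == operational
    # The discovered count should equal the number of configured base bdevs.
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert configured == info["num_base_bdevs_discovered"]
    return True

print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 3))
```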
00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.220 "name": "Existed_Raid", 00:11:19.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.220 "strip_size_kb": 64, 00:11:19.220 "state": "configuring", 00:11:19.220 "raid_level": "raid0", 00:11:19.220 "superblock": false, 00:11:19.220 "num_base_bdevs": 3, 00:11:19.220 "num_base_bdevs_discovered": 2, 00:11:19.220 "num_base_bdevs_operational": 3, 00:11:19.220 "base_bdevs_list": [ 00:11:19.220 { 00:11:19.220 "name": null, 00:11:19.220 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:19.220 "is_configured": false, 00:11:19.220 "data_offset": 0, 00:11:19.220 "data_size": 65536 00:11:19.220 }, 00:11:19.220 { 00:11:19.220 "name": "BaseBdev2", 00:11:19.220 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:19.220 "is_configured": true, 00:11:19.220 "data_offset": 0, 00:11:19.220 "data_size": 65536 00:11:19.220 }, 00:11:19.220 { 00:11:19.220 "name": "BaseBdev3", 00:11:19.220 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:19.220 "is_configured": true, 00:11:19.220 "data_offset": 0, 00:11:19.220 "data_size": 65536 00:11:19.220 } 00:11:19.220 ] 00:11:19.220 }' 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.220 08:43:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.801 
08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7f1c7659-62ab-49d6-968b-4edf48d300d2 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.801 [2024-11-27 08:43:16.501099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:19.801 [2024-11-27 08:43:16.501164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:19.801 [2024-11-27 08:43:16.501184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:19.801 [2024-11-27 08:43:16.501573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
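One detail worth noting in the step above: the earlier-deleted base bdev is recreated as `bdev_malloc_create 32 512 -b NewBaseBdev -u 7f1c7659-...`, i.e. a 32 MiB malloc bdev with 512-byte blocks carrying the original UUID, which is what lets the raid claim it and transition back toward online. The size arithmetic matches the `"num_blocks": 65536` reported in the subsequent `bdev_get_bdevs` dump; a quick sanity check (plain arithmetic on the values visible in the trace, not an SPDK call):

```python
# bdev_malloc_create takes the size in MiB and the block size in bytes.
size_mib = 32
block_size = 512

num_blocks = size_mib * 1024 * 1024 // block_size
print(num_blocks)  # matches "num_blocks": 65536 in the trace
```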
00:11:19.801 [2024-11-27 08:43:16.501793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:19.801 [2024-11-27 08:43:16.501818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:19.801 [2024-11-27 08:43:16.502172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.801 NewBaseBdev 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.801 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:19.801 [ 00:11:19.801 { 00:11:19.801 "name": "NewBaseBdev", 00:11:19.801 "aliases": [ 00:11:19.801 "7f1c7659-62ab-49d6-968b-4edf48d300d2" 00:11:19.801 ], 00:11:19.801 "product_name": "Malloc disk", 00:11:19.801 "block_size": 512, 00:11:19.801 "num_blocks": 65536, 00:11:19.801 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:19.801 "assigned_rate_limits": { 00:11:19.801 "rw_ios_per_sec": 0, 00:11:19.801 "rw_mbytes_per_sec": 0, 00:11:19.801 "r_mbytes_per_sec": 0, 00:11:19.801 "w_mbytes_per_sec": 0 00:11:19.801 }, 00:11:19.801 "claimed": true, 00:11:19.801 "claim_type": "exclusive_write", 00:11:19.801 "zoned": false, 00:11:19.801 "supported_io_types": { 00:11:19.801 "read": true, 00:11:19.801 "write": true, 00:11:19.801 "unmap": true, 00:11:19.801 "flush": true, 00:11:19.801 "reset": true, 00:11:19.801 "nvme_admin": false, 00:11:19.801 "nvme_io": false, 00:11:19.801 "nvme_io_md": false, 00:11:19.801 "write_zeroes": true, 00:11:19.801 "zcopy": true, 00:11:19.801 "get_zone_info": false, 00:11:19.801 "zone_management": false, 00:11:19.801 "zone_append": false, 00:11:19.801 "compare": false, 00:11:19.801 "compare_and_write": false, 00:11:19.801 "abort": true, 00:11:19.801 "seek_hole": false, 00:11:19.801 "seek_data": false, 00:11:19.801 "copy": true, 00:11:19.801 "nvme_iov_md": false 00:11:19.801 }, 00:11:19.801 "memory_domains": [ 00:11:19.801 { 00:11:19.801 "dma_device_id": "system", 00:11:19.801 "dma_device_type": 1 00:11:19.801 }, 00:11:19.801 { 00:11:19.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.801 "dma_device_type": 2 00:11:19.801 } 00:11:19.801 ], 00:11:19.801 "driver_specific": {} 00:11:19.801 } 00:11:19.801 ] 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.061 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.061 "name": "Existed_Raid", 00:11:20.061 "uuid": "fbcd7e6f-0828-4d4e-bfe3-5bba041bacc7", 00:11:20.061 "strip_size_kb": 64, 00:11:20.061 "state": "online", 00:11:20.061 "raid_level": "raid0", 00:11:20.061 "superblock": false, 00:11:20.061 "num_base_bdevs": 3, 00:11:20.061 
"num_base_bdevs_discovered": 3, 00:11:20.061 "num_base_bdevs_operational": 3, 00:11:20.061 "base_bdevs_list": [ 00:11:20.061 { 00:11:20.061 "name": "NewBaseBdev", 00:11:20.061 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:20.061 "is_configured": true, 00:11:20.061 "data_offset": 0, 00:11:20.061 "data_size": 65536 00:11:20.061 }, 00:11:20.061 { 00:11:20.061 "name": "BaseBdev2", 00:11:20.061 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:20.061 "is_configured": true, 00:11:20.061 "data_offset": 0, 00:11:20.061 "data_size": 65536 00:11:20.061 }, 00:11:20.061 { 00:11:20.061 "name": "BaseBdev3", 00:11:20.061 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:20.061 "is_configured": true, 00:11:20.061 "data_offset": 0, 00:11:20.061 "data_size": 65536 00:11:20.061 } 00:11:20.061 ] 00:11:20.061 }' 00:11:20.061 08:43:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.061 08:43:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.320 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.320 [2024-11-27 08:43:17.065758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.580 "name": "Existed_Raid", 00:11:20.580 "aliases": [ 00:11:20.580 "fbcd7e6f-0828-4d4e-bfe3-5bba041bacc7" 00:11:20.580 ], 00:11:20.580 "product_name": "Raid Volume", 00:11:20.580 "block_size": 512, 00:11:20.580 "num_blocks": 196608, 00:11:20.580 "uuid": "fbcd7e6f-0828-4d4e-bfe3-5bba041bacc7", 00:11:20.580 "assigned_rate_limits": { 00:11:20.580 "rw_ios_per_sec": 0, 00:11:20.580 "rw_mbytes_per_sec": 0, 00:11:20.580 "r_mbytes_per_sec": 0, 00:11:20.580 "w_mbytes_per_sec": 0 00:11:20.580 }, 00:11:20.580 "claimed": false, 00:11:20.580 "zoned": false, 00:11:20.580 "supported_io_types": { 00:11:20.580 "read": true, 00:11:20.580 "write": true, 00:11:20.580 "unmap": true, 00:11:20.580 "flush": true, 00:11:20.580 "reset": true, 00:11:20.580 "nvme_admin": false, 00:11:20.580 "nvme_io": false, 00:11:20.580 "nvme_io_md": false, 00:11:20.580 "write_zeroes": true, 00:11:20.580 "zcopy": false, 00:11:20.580 "get_zone_info": false, 00:11:20.580 "zone_management": false, 00:11:20.580 "zone_append": false, 00:11:20.580 "compare": false, 00:11:20.580 "compare_and_write": false, 00:11:20.580 "abort": false, 00:11:20.580 "seek_hole": false, 00:11:20.580 "seek_data": false, 00:11:20.580 "copy": false, 00:11:20.580 "nvme_iov_md": false 00:11:20.580 }, 00:11:20.580 "memory_domains": [ 00:11:20.580 { 00:11:20.580 "dma_device_id": "system", 00:11:20.580 "dma_device_type": 1 00:11:20.580 }, 00:11:20.580 { 00:11:20.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.580 "dma_device_type": 2 00:11:20.580 }, 
00:11:20.580 { 00:11:20.580 "dma_device_id": "system", 00:11:20.580 "dma_device_type": 1 00:11:20.580 }, 00:11:20.580 { 00:11:20.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.580 "dma_device_type": 2 00:11:20.580 }, 00:11:20.580 { 00:11:20.580 "dma_device_id": "system", 00:11:20.580 "dma_device_type": 1 00:11:20.580 }, 00:11:20.580 { 00:11:20.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.580 "dma_device_type": 2 00:11:20.580 } 00:11:20.580 ], 00:11:20.580 "driver_specific": { 00:11:20.580 "raid": { 00:11:20.580 "uuid": "fbcd7e6f-0828-4d4e-bfe3-5bba041bacc7", 00:11:20.580 "strip_size_kb": 64, 00:11:20.580 "state": "online", 00:11:20.580 "raid_level": "raid0", 00:11:20.580 "superblock": false, 00:11:20.580 "num_base_bdevs": 3, 00:11:20.580 "num_base_bdevs_discovered": 3, 00:11:20.580 "num_base_bdevs_operational": 3, 00:11:20.580 "base_bdevs_list": [ 00:11:20.580 { 00:11:20.580 "name": "NewBaseBdev", 00:11:20.580 "uuid": "7f1c7659-62ab-49d6-968b-4edf48d300d2", 00:11:20.580 "is_configured": true, 00:11:20.580 "data_offset": 0, 00:11:20.580 "data_size": 65536 00:11:20.580 }, 00:11:20.580 { 00:11:20.580 "name": "BaseBdev2", 00:11:20.580 "uuid": "30adb9fc-a462-4183-b823-4ad647aebfa8", 00:11:20.580 "is_configured": true, 00:11:20.580 "data_offset": 0, 00:11:20.580 "data_size": 65536 00:11:20.580 }, 00:11:20.580 { 00:11:20.580 "name": "BaseBdev3", 00:11:20.580 "uuid": "35af960a-6a88-4328-9365-5e10623002e6", 00:11:20.580 "is_configured": true, 00:11:20.580 "data_offset": 0, 00:11:20.580 "data_size": 65536 00:11:20.580 } 00:11:20.580 ] 00:11:20.580 } 00:11:20.580 } 00:11:20.580 }' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:20.580 BaseBdev2 00:11:20.580 BaseBdev3' 00:11:20.580 08:43:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.580 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.840 [2024-11-27 08:43:17.389483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.840 [2024-11-27 08:43:17.389523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.840 [2024-11-27 08:43:17.389652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.840 [2024-11-27 08:43:17.389741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.840 [2024-11-27 08:43:17.389765] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63919 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 63919 ']' 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # kill -0 63919 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 63919 00:11:20.840 killing process with pid 63919 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 63919' 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # kill 63919 00:11:20.840 [2024-11-27 08:43:17.429518] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.840 08:43:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 63919 00:11:21.100 [2024-11-27 08:43:17.723819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.503 08:43:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:22.503 00:11:22.503 real 0m12.261s 00:11:22.503 user 0m20.139s 00:11:22.503 sys 0m1.784s 00:11:22.503 08:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- 
# xtrace_disable 00:11:22.503 ************************************ 00:11:22.503 END TEST raid_state_function_test 00:11:22.504 ************************************ 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.504 08:43:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:22.504 08:43:18 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:11:22.504 08:43:18 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:22.504 08:43:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.504 ************************************ 00:11:22.504 START TEST raid_state_function_test_sb 00:11:22.504 ************************************ 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test raid0 3 true 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:22.504 Process raid pid: 64562 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64562 
00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64562' 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64562 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 64562 ']' 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:22.504 08:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.504 [2024-11-27 08:43:19.062549] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:11:22.504 [2024-11-27 08:43:19.063014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.504 [2024-11-27 08:43:19.252497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.762 [2024-11-27 08:43:19.408483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.020 [2024-11-27 08:43:19.637751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.020 [2024-11-27 08:43:19.638056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.321 [2024-11-27 08:43:20.054644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.321 [2024-11-27 08:43:20.054902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.321 [2024-11-27 08:43:20.054934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.321 [2024-11-27 08:43:20.054955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.321 [2024-11-27 08:43:20.054966] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:11:23.321 [2024-11-27 08:43:20.054982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.321 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.579 08:43:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.579 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.579 "name": "Existed_Raid", 00:11:23.579 "uuid": "f1feedf5-f790-42bd-8bc6-758c10dd9e08", 00:11:23.579 "strip_size_kb": 64, 00:11:23.579 "state": "configuring", 00:11:23.579 "raid_level": "raid0", 00:11:23.579 "superblock": true, 00:11:23.579 "num_base_bdevs": 3, 00:11:23.579 "num_base_bdevs_discovered": 0, 00:11:23.579 "num_base_bdevs_operational": 3, 00:11:23.579 "base_bdevs_list": [ 00:11:23.579 { 00:11:23.579 "name": "BaseBdev1", 00:11:23.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.579 "is_configured": false, 00:11:23.579 "data_offset": 0, 00:11:23.579 "data_size": 0 00:11:23.579 }, 00:11:23.579 { 00:11:23.579 "name": "BaseBdev2", 00:11:23.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.579 "is_configured": false, 00:11:23.579 "data_offset": 0, 00:11:23.579 "data_size": 0 00:11:23.579 }, 00:11:23.579 { 00:11:23.579 "name": "BaseBdev3", 00:11:23.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.579 "is_configured": false, 00:11:23.579 "data_offset": 0, 00:11:23.579 "data_size": 0 00:11:23.579 } 00:11:23.579 ] 00:11:23.579 }' 00:11:23.579 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.579 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.837 [2024-11-27 08:43:20.578775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.837 [2024-11-27 08:43:20.578824] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.837 [2024-11-27 08:43:20.586748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.837 [2024-11-27 08:43:20.586825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.837 [2024-11-27 08:43:20.586842] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.837 [2024-11-27 08:43:20.586858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.837 [2024-11-27 08:43:20.586868] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.837 [2024-11-27 08:43:20.586882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.837 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.095 [2024-11-27 08:43:20.638175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.095 BaseBdev1 
00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.095 [ 00:11:24.095 { 00:11:24.095 "name": "BaseBdev1", 00:11:24.095 "aliases": [ 00:11:24.095 "be205954-eee0-4f65-9d62-9e5c45240262" 00:11:24.095 ], 00:11:24.095 "product_name": "Malloc disk", 00:11:24.095 "block_size": 512, 00:11:24.095 "num_blocks": 65536, 00:11:24.095 "uuid": "be205954-eee0-4f65-9d62-9e5c45240262", 00:11:24.095 "assigned_rate_limits": { 00:11:24.095 
"rw_ios_per_sec": 0, 00:11:24.095 "rw_mbytes_per_sec": 0, 00:11:24.095 "r_mbytes_per_sec": 0, 00:11:24.095 "w_mbytes_per_sec": 0 00:11:24.095 }, 00:11:24.095 "claimed": true, 00:11:24.095 "claim_type": "exclusive_write", 00:11:24.095 "zoned": false, 00:11:24.095 "supported_io_types": { 00:11:24.095 "read": true, 00:11:24.095 "write": true, 00:11:24.095 "unmap": true, 00:11:24.095 "flush": true, 00:11:24.095 "reset": true, 00:11:24.095 "nvme_admin": false, 00:11:24.095 "nvme_io": false, 00:11:24.095 "nvme_io_md": false, 00:11:24.095 "write_zeroes": true, 00:11:24.095 "zcopy": true, 00:11:24.095 "get_zone_info": false, 00:11:24.095 "zone_management": false, 00:11:24.095 "zone_append": false, 00:11:24.095 "compare": false, 00:11:24.095 "compare_and_write": false, 00:11:24.095 "abort": true, 00:11:24.095 "seek_hole": false, 00:11:24.095 "seek_data": false, 00:11:24.095 "copy": true, 00:11:24.095 "nvme_iov_md": false 00:11:24.095 }, 00:11:24.095 "memory_domains": [ 00:11:24.095 { 00:11:24.095 "dma_device_id": "system", 00:11:24.095 "dma_device_type": 1 00:11:24.095 }, 00:11:24.095 { 00:11:24.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.095 "dma_device_type": 2 00:11:24.095 } 00:11:24.095 ], 00:11:24.095 "driver_specific": {} 00:11:24.095 } 00:11:24.095 ] 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.095 "name": "Existed_Raid", 00:11:24.095 "uuid": "ce66b82b-70e0-4229-934b-65f2b070c25d", 00:11:24.095 "strip_size_kb": 64, 00:11:24.095 "state": "configuring", 00:11:24.095 "raid_level": "raid0", 00:11:24.095 "superblock": true, 00:11:24.095 "num_base_bdevs": 3, 00:11:24.095 "num_base_bdevs_discovered": 1, 00:11:24.095 "num_base_bdevs_operational": 3, 00:11:24.095 "base_bdevs_list": [ 00:11:24.095 { 00:11:24.095 "name": "BaseBdev1", 00:11:24.095 "uuid": "be205954-eee0-4f65-9d62-9e5c45240262", 00:11:24.095 "is_configured": true, 00:11:24.095 "data_offset": 2048, 00:11:24.095 "data_size": 63488 
00:11:24.095 }, 00:11:24.095 { 00:11:24.095 "name": "BaseBdev2", 00:11:24.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.095 "is_configured": false, 00:11:24.095 "data_offset": 0, 00:11:24.095 "data_size": 0 00:11:24.095 }, 00:11:24.095 { 00:11:24.095 "name": "BaseBdev3", 00:11:24.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.095 "is_configured": false, 00:11:24.095 "data_offset": 0, 00:11:24.095 "data_size": 0 00:11:24.095 } 00:11:24.095 ] 00:11:24.095 }' 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.095 08:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.661 [2024-11-27 08:43:21.243051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.661 [2024-11-27 08:43:21.243385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.661 [2024-11-27 08:43:21.251110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.661 [2024-11-27 
08:43:21.253792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.661 [2024-11-27 08:43:21.253993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.661 [2024-11-27 08:43:21.254023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.661 [2024-11-27 08:43:21.254042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.661 "name": "Existed_Raid", 00:11:24.661 "uuid": "c5ede665-1756-457d-a939-924dc1086f89", 00:11:24.661 "strip_size_kb": 64, 00:11:24.661 "state": "configuring", 00:11:24.661 "raid_level": "raid0", 00:11:24.661 "superblock": true, 00:11:24.661 "num_base_bdevs": 3, 00:11:24.661 "num_base_bdevs_discovered": 1, 00:11:24.661 "num_base_bdevs_operational": 3, 00:11:24.661 "base_bdevs_list": [ 00:11:24.661 { 00:11:24.661 "name": "BaseBdev1", 00:11:24.661 "uuid": "be205954-eee0-4f65-9d62-9e5c45240262", 00:11:24.661 "is_configured": true, 00:11:24.661 "data_offset": 2048, 00:11:24.661 "data_size": 63488 00:11:24.661 }, 00:11:24.661 { 00:11:24.661 "name": "BaseBdev2", 00:11:24.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.661 "is_configured": false, 00:11:24.661 "data_offset": 0, 00:11:24.661 "data_size": 0 00:11:24.661 }, 00:11:24.661 { 00:11:24.661 "name": "BaseBdev3", 00:11:24.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.661 "is_configured": false, 00:11:24.661 "data_offset": 0, 00:11:24.661 "data_size": 0 00:11:24.661 } 00:11:24.661 ] 00:11:24.661 }' 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.661 08:43:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.226 [2024-11-27 08:43:21.817594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.226 BaseBdev2 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.226 [ 00:11:25.226 { 00:11:25.226 "name": "BaseBdev2", 00:11:25.226 "aliases": [ 00:11:25.226 "fd3083d8-53b2-4da3-a039-ba3f09b80412" 00:11:25.226 ], 00:11:25.226 "product_name": "Malloc disk", 00:11:25.226 "block_size": 512, 00:11:25.226 "num_blocks": 65536, 00:11:25.226 "uuid": "fd3083d8-53b2-4da3-a039-ba3f09b80412", 00:11:25.226 "assigned_rate_limits": { 00:11:25.226 "rw_ios_per_sec": 0, 00:11:25.226 "rw_mbytes_per_sec": 0, 00:11:25.226 "r_mbytes_per_sec": 0, 00:11:25.226 "w_mbytes_per_sec": 0 00:11:25.226 }, 00:11:25.226 "claimed": true, 00:11:25.226 "claim_type": "exclusive_write", 00:11:25.226 "zoned": false, 00:11:25.226 "supported_io_types": { 00:11:25.226 "read": true, 00:11:25.226 "write": true, 00:11:25.226 "unmap": true, 00:11:25.226 "flush": true, 00:11:25.226 "reset": true, 00:11:25.226 "nvme_admin": false, 00:11:25.226 "nvme_io": false, 00:11:25.226 "nvme_io_md": false, 00:11:25.226 "write_zeroes": true, 00:11:25.226 "zcopy": true, 00:11:25.226 "get_zone_info": false, 00:11:25.226 "zone_management": false, 00:11:25.226 "zone_append": false, 00:11:25.226 "compare": false, 00:11:25.226 "compare_and_write": false, 00:11:25.226 "abort": true, 00:11:25.226 "seek_hole": false, 00:11:25.226 "seek_data": false, 00:11:25.226 "copy": true, 00:11:25.226 "nvme_iov_md": false 00:11:25.226 }, 00:11:25.226 "memory_domains": [ 00:11:25.226 { 00:11:25.226 "dma_device_id": "system", 00:11:25.226 "dma_device_type": 1 00:11:25.226 }, 00:11:25.226 { 00:11:25.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.226 "dma_device_type": 2 00:11:25.226 } 00:11:25.226 ], 00:11:25.226 "driver_specific": {} 00:11:25.226 } 00:11:25.226 ] 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # return 0 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.226 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.227 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.227 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.227 08:43:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.227 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.227 "name": "Existed_Raid", 00:11:25.227 "uuid": "c5ede665-1756-457d-a939-924dc1086f89", 00:11:25.227 "strip_size_kb": 64, 00:11:25.227 "state": "configuring", 00:11:25.227 "raid_level": "raid0", 00:11:25.227 "superblock": true, 00:11:25.227 "num_base_bdevs": 3, 00:11:25.227 "num_base_bdevs_discovered": 2, 00:11:25.227 "num_base_bdevs_operational": 3, 00:11:25.227 "base_bdevs_list": [ 00:11:25.227 { 00:11:25.227 "name": "BaseBdev1", 00:11:25.227 "uuid": "be205954-eee0-4f65-9d62-9e5c45240262", 00:11:25.227 "is_configured": true, 00:11:25.227 "data_offset": 2048, 00:11:25.227 "data_size": 63488 00:11:25.227 }, 00:11:25.227 { 00:11:25.227 "name": "BaseBdev2", 00:11:25.227 "uuid": "fd3083d8-53b2-4da3-a039-ba3f09b80412", 00:11:25.227 "is_configured": true, 00:11:25.227 "data_offset": 2048, 00:11:25.227 "data_size": 63488 00:11:25.227 }, 00:11:25.227 { 00:11:25.227 "name": "BaseBdev3", 00:11:25.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.227 "is_configured": false, 00:11:25.227 "data_offset": 0, 00:11:25.227 "data_size": 0 00:11:25.227 } 00:11:25.227 ] 00:11:25.227 }' 00:11:25.227 08:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.227 08:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 [2024-11-27 08:43:22.417017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.794 [2024-11-27 08:43:22.417770] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:25.794 [2024-11-27 08:43:22.417811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:25.794 BaseBdev3 00:11:25.794 [2024-11-27 08:43:22.418233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:25.794 [2024-11-27 08:43:22.418466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:25.794 [2024-11-27 08:43:22.418491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:25.794 [2024-11-27 08:43:22.418710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.794 [ 00:11:25.794 { 00:11:25.794 "name": "BaseBdev3", 00:11:25.794 "aliases": [ 00:11:25.794 "d30e198f-7192-4ed7-abc7-fc9229e2f5fa" 00:11:25.794 ], 00:11:25.794 "product_name": "Malloc disk", 00:11:25.794 "block_size": 512, 00:11:25.794 "num_blocks": 65536, 00:11:25.794 "uuid": "d30e198f-7192-4ed7-abc7-fc9229e2f5fa", 00:11:25.794 "assigned_rate_limits": { 00:11:25.794 "rw_ios_per_sec": 0, 00:11:25.794 "rw_mbytes_per_sec": 0, 00:11:25.794 "r_mbytes_per_sec": 0, 00:11:25.794 "w_mbytes_per_sec": 0 00:11:25.794 }, 00:11:25.794 "claimed": true, 00:11:25.794 "claim_type": "exclusive_write", 00:11:25.794 "zoned": false, 00:11:25.794 "supported_io_types": { 00:11:25.794 "read": true, 00:11:25.794 "write": true, 00:11:25.794 "unmap": true, 00:11:25.794 "flush": true, 00:11:25.794 "reset": true, 00:11:25.794 "nvme_admin": false, 00:11:25.794 "nvme_io": false, 00:11:25.794 "nvme_io_md": false, 00:11:25.794 "write_zeroes": true, 00:11:25.794 "zcopy": true, 00:11:25.794 "get_zone_info": false, 00:11:25.794 "zone_management": false, 00:11:25.794 "zone_append": false, 00:11:25.794 "compare": false, 00:11:25.794 "compare_and_write": false, 00:11:25.794 "abort": true, 00:11:25.794 "seek_hole": false, 00:11:25.794 "seek_data": false, 00:11:25.794 "copy": true, 00:11:25.794 "nvme_iov_md": false 00:11:25.794 }, 00:11:25.794 "memory_domains": [ 00:11:25.794 { 00:11:25.794 "dma_device_id": "system", 00:11:25.794 "dma_device_type": 1 00:11:25.794 }, 00:11:25.794 { 00:11:25.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.794 "dma_device_type": 2 00:11:25.794 } 00:11:25.794 ], 00:11:25.794 "driver_specific": 
{} 00:11:25.794 } 00:11:25.794 ] 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.794 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.795 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.795 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.795 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:25.795 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.795 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.795 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.795 "name": "Existed_Raid", 00:11:25.795 "uuid": "c5ede665-1756-457d-a939-924dc1086f89", 00:11:25.795 "strip_size_kb": 64, 00:11:25.795 "state": "online", 00:11:25.795 "raid_level": "raid0", 00:11:25.795 "superblock": true, 00:11:25.795 "num_base_bdevs": 3, 00:11:25.795 "num_base_bdevs_discovered": 3, 00:11:25.795 "num_base_bdevs_operational": 3, 00:11:25.795 "base_bdevs_list": [ 00:11:25.795 { 00:11:25.795 "name": "BaseBdev1", 00:11:25.795 "uuid": "be205954-eee0-4f65-9d62-9e5c45240262", 00:11:25.795 "is_configured": true, 00:11:25.795 "data_offset": 2048, 00:11:25.795 "data_size": 63488 00:11:25.795 }, 00:11:25.795 { 00:11:25.795 "name": "BaseBdev2", 00:11:25.795 "uuid": "fd3083d8-53b2-4da3-a039-ba3f09b80412", 00:11:25.795 "is_configured": true, 00:11:25.795 "data_offset": 2048, 00:11:25.795 "data_size": 63488 00:11:25.795 }, 00:11:25.795 { 00:11:25.795 "name": "BaseBdev3", 00:11:25.795 "uuid": "d30e198f-7192-4ed7-abc7-fc9229e2f5fa", 00:11:25.795 "is_configured": true, 00:11:25.795 "data_offset": 2048, 00:11:25.795 "data_size": 63488 00:11:25.795 } 00:11:25.795 ] 00:11:25.795 }' 00:11:25.795 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.795 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.361 [2024-11-27 08:43:22.965714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.361 08:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.361 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.361 "name": "Existed_Raid", 00:11:26.361 "aliases": [ 00:11:26.361 "c5ede665-1756-457d-a939-924dc1086f89" 00:11:26.361 ], 00:11:26.361 "product_name": "Raid Volume", 00:11:26.361 "block_size": 512, 00:11:26.361 "num_blocks": 190464, 00:11:26.361 "uuid": "c5ede665-1756-457d-a939-924dc1086f89", 00:11:26.361 "assigned_rate_limits": { 00:11:26.361 "rw_ios_per_sec": 0, 00:11:26.361 "rw_mbytes_per_sec": 0, 00:11:26.361 "r_mbytes_per_sec": 0, 00:11:26.361 "w_mbytes_per_sec": 0 00:11:26.361 }, 00:11:26.361 "claimed": false, 00:11:26.361 "zoned": false, 00:11:26.361 "supported_io_types": { 00:11:26.361 "read": true, 00:11:26.361 "write": true, 00:11:26.361 "unmap": true, 00:11:26.361 "flush": true, 00:11:26.361 "reset": true, 00:11:26.361 "nvme_admin": false, 00:11:26.361 "nvme_io": false, 00:11:26.361 "nvme_io_md": false, 00:11:26.361 
"write_zeroes": true, 00:11:26.361 "zcopy": false, 00:11:26.361 "get_zone_info": false, 00:11:26.361 "zone_management": false, 00:11:26.361 "zone_append": false, 00:11:26.361 "compare": false, 00:11:26.361 "compare_and_write": false, 00:11:26.361 "abort": false, 00:11:26.361 "seek_hole": false, 00:11:26.361 "seek_data": false, 00:11:26.361 "copy": false, 00:11:26.361 "nvme_iov_md": false 00:11:26.361 }, 00:11:26.361 "memory_domains": [ 00:11:26.361 { 00:11:26.361 "dma_device_id": "system", 00:11:26.361 "dma_device_type": 1 00:11:26.361 }, 00:11:26.361 { 00:11:26.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.361 "dma_device_type": 2 00:11:26.361 }, 00:11:26.361 { 00:11:26.361 "dma_device_id": "system", 00:11:26.361 "dma_device_type": 1 00:11:26.361 }, 00:11:26.361 { 00:11:26.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.361 "dma_device_type": 2 00:11:26.361 }, 00:11:26.361 { 00:11:26.361 "dma_device_id": "system", 00:11:26.361 "dma_device_type": 1 00:11:26.361 }, 00:11:26.361 { 00:11:26.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.361 "dma_device_type": 2 00:11:26.361 } 00:11:26.361 ], 00:11:26.361 "driver_specific": { 00:11:26.361 "raid": { 00:11:26.361 "uuid": "c5ede665-1756-457d-a939-924dc1086f89", 00:11:26.361 "strip_size_kb": 64, 00:11:26.361 "state": "online", 00:11:26.361 "raid_level": "raid0", 00:11:26.361 "superblock": true, 00:11:26.361 "num_base_bdevs": 3, 00:11:26.361 "num_base_bdevs_discovered": 3, 00:11:26.361 "num_base_bdevs_operational": 3, 00:11:26.361 "base_bdevs_list": [ 00:11:26.361 { 00:11:26.361 "name": "BaseBdev1", 00:11:26.361 "uuid": "be205954-eee0-4f65-9d62-9e5c45240262", 00:11:26.361 "is_configured": true, 00:11:26.361 "data_offset": 2048, 00:11:26.361 "data_size": 63488 00:11:26.361 }, 00:11:26.361 { 00:11:26.361 "name": "BaseBdev2", 00:11:26.361 "uuid": "fd3083d8-53b2-4da3-a039-ba3f09b80412", 00:11:26.361 "is_configured": true, 00:11:26.361 "data_offset": 2048, 00:11:26.361 "data_size": 63488 00:11:26.361 }, 
00:11:26.361 { 00:11:26.361 "name": "BaseBdev3", 00:11:26.361 "uuid": "d30e198f-7192-4ed7-abc7-fc9229e2f5fa", 00:11:26.361 "is_configured": true, 00:11:26.362 "data_offset": 2048, 00:11:26.362 "data_size": 63488 00:11:26.362 } 00:11:26.362 ] 00:11:26.362 } 00:11:26.362 } 00:11:26.362 }' 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.362 BaseBdev2 00:11:26.362 BaseBdev3' 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.362 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.620 
08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.620 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 [2024-11-27 08:43:23.285466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.620 [2024-11-27 08:43:23.285515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.620 [2024-11-27 08:43:23.285602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.880 "name": "Existed_Raid", 00:11:26.880 "uuid": "c5ede665-1756-457d-a939-924dc1086f89", 00:11:26.880 "strip_size_kb": 64, 00:11:26.880 "state": "offline", 00:11:26.880 "raid_level": "raid0", 00:11:26.880 "superblock": true, 00:11:26.880 "num_base_bdevs": 3, 00:11:26.880 "num_base_bdevs_discovered": 2, 00:11:26.880 "num_base_bdevs_operational": 2, 00:11:26.880 "base_bdevs_list": [ 00:11:26.880 { 00:11:26.880 "name": null, 00:11:26.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.880 "is_configured": false, 00:11:26.880 "data_offset": 0, 00:11:26.880 "data_size": 63488 00:11:26.880 }, 00:11:26.880 { 00:11:26.880 "name": "BaseBdev2", 00:11:26.880 "uuid": "fd3083d8-53b2-4da3-a039-ba3f09b80412", 00:11:26.880 "is_configured": true, 00:11:26.880 "data_offset": 2048, 00:11:26.880 "data_size": 63488 00:11:26.880 }, 00:11:26.880 { 00:11:26.880 "name": "BaseBdev3", 00:11:26.880 "uuid": "d30e198f-7192-4ed7-abc7-fc9229e2f5fa", 
00:11:26.880 "is_configured": true, 00:11:26.880 "data_offset": 2048, 00:11:26.880 "data_size": 63488 00:11:26.880 } 00:11:26.880 ] 00:11:26.880 }' 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.880 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.447 08:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.447 [2024-11-27 08:43:23.973066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.447 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.447 08:43:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.447 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.447 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.447 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.447 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.447 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.447 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.448 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.448 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.448 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.448 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.448 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.448 [2024-11-27 08:43:24.127495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.448 [2024-11-27 08:43:24.127572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.707 BaseBdev2 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:11:27.707 08:43:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.707 [ 00:11:27.707 { 00:11:27.707 "name": "BaseBdev2", 00:11:27.707 "aliases": [ 00:11:27.707 "153bc6cd-1049-4e89-919a-fbc68390be3a" 00:11:27.707 ], 00:11:27.707 "product_name": "Malloc disk", 00:11:27.707 "block_size": 512, 00:11:27.707 "num_blocks": 65536, 00:11:27.707 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:27.707 "assigned_rate_limits": { 00:11:27.707 "rw_ios_per_sec": 0, 00:11:27.707 "rw_mbytes_per_sec": 0, 00:11:27.707 "r_mbytes_per_sec": 0, 00:11:27.707 "w_mbytes_per_sec": 0 00:11:27.707 }, 00:11:27.707 "claimed": false, 00:11:27.707 "zoned": false, 00:11:27.707 "supported_io_types": { 00:11:27.707 "read": true, 00:11:27.707 "write": true, 00:11:27.707 "unmap": true, 00:11:27.707 "flush": true, 00:11:27.707 "reset": true, 00:11:27.707 "nvme_admin": false, 00:11:27.707 "nvme_io": false, 00:11:27.707 "nvme_io_md": false, 00:11:27.707 "write_zeroes": true, 00:11:27.707 "zcopy": true, 00:11:27.707 "get_zone_info": false, 00:11:27.707 
"zone_management": false, 00:11:27.707 "zone_append": false, 00:11:27.707 "compare": false, 00:11:27.707 "compare_and_write": false, 00:11:27.707 "abort": true, 00:11:27.707 "seek_hole": false, 00:11:27.707 "seek_data": false, 00:11:27.707 "copy": true, 00:11:27.707 "nvme_iov_md": false 00:11:27.707 }, 00:11:27.707 "memory_domains": [ 00:11:27.707 { 00:11:27.707 "dma_device_id": "system", 00:11:27.707 "dma_device_type": 1 00:11:27.707 }, 00:11:27.707 { 00:11:27.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.707 "dma_device_type": 2 00:11:27.707 } 00:11:27.707 ], 00:11:27.707 "driver_specific": {} 00:11:27.707 } 00:11:27.707 ] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.707 BaseBdev3 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local i 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.707 [ 00:11:27.707 { 00:11:27.707 "name": "BaseBdev3", 00:11:27.707 "aliases": [ 00:11:27.707 "814176ab-5a3f-4f93-bf07-f3570aa17681" 00:11:27.707 ], 00:11:27.707 "product_name": "Malloc disk", 00:11:27.707 "block_size": 512, 00:11:27.707 "num_blocks": 65536, 00:11:27.707 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:27.707 "assigned_rate_limits": { 00:11:27.707 "rw_ios_per_sec": 0, 00:11:27.707 "rw_mbytes_per_sec": 0, 00:11:27.707 "r_mbytes_per_sec": 0, 00:11:27.707 "w_mbytes_per_sec": 0 00:11:27.707 }, 00:11:27.707 "claimed": false, 00:11:27.707 "zoned": false, 00:11:27.707 "supported_io_types": { 00:11:27.707 "read": true, 00:11:27.707 "write": true, 00:11:27.707 "unmap": true, 00:11:27.707 "flush": true, 00:11:27.707 "reset": true, 00:11:27.707 "nvme_admin": false, 00:11:27.707 "nvme_io": false, 00:11:27.707 "nvme_io_md": false, 00:11:27.707 "write_zeroes": true, 00:11:27.707 
"zcopy": true, 00:11:27.707 "get_zone_info": false, 00:11:27.707 "zone_management": false, 00:11:27.707 "zone_append": false, 00:11:27.707 "compare": false, 00:11:27.707 "compare_and_write": false, 00:11:27.707 "abort": true, 00:11:27.707 "seek_hole": false, 00:11:27.707 "seek_data": false, 00:11:27.707 "copy": true, 00:11:27.707 "nvme_iov_md": false 00:11:27.707 }, 00:11:27.707 "memory_domains": [ 00:11:27.707 { 00:11:27.707 "dma_device_id": "system", 00:11:27.707 "dma_device_type": 1 00:11:27.707 }, 00:11:27.707 { 00:11:27.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.707 "dma_device_type": 2 00:11:27.707 } 00:11:27.707 ], 00:11:27.707 "driver_specific": {} 00:11:27.707 } 00:11:27.707 ] 00:11:27.707 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.708 [2024-11-27 08:43:24.439827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.708 [2024-11-27 08:43:24.439899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.708 [2024-11-27 08:43:24.439931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.708 [2024-11-27 08:43:24.442481] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.708 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.029 08:43:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.029 "name": "Existed_Raid", 00:11:28.029 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:28.029 "strip_size_kb": 64, 00:11:28.029 "state": "configuring", 00:11:28.029 "raid_level": "raid0", 00:11:28.029 "superblock": true, 00:11:28.029 "num_base_bdevs": 3, 00:11:28.029 "num_base_bdevs_discovered": 2, 00:11:28.029 "num_base_bdevs_operational": 3, 00:11:28.029 "base_bdevs_list": [ 00:11:28.029 { 00:11:28.029 "name": "BaseBdev1", 00:11:28.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.029 "is_configured": false, 00:11:28.029 "data_offset": 0, 00:11:28.029 "data_size": 0 00:11:28.029 }, 00:11:28.029 { 00:11:28.029 "name": "BaseBdev2", 00:11:28.029 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:28.029 "is_configured": true, 00:11:28.029 "data_offset": 2048, 00:11:28.029 "data_size": 63488 00:11:28.029 }, 00:11:28.029 { 00:11:28.029 "name": "BaseBdev3", 00:11:28.029 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:28.029 "is_configured": true, 00:11:28.029 "data_offset": 2048, 00:11:28.029 "data_size": 63488 00:11:28.029 } 00:11:28.029 ] 00:11:28.029 }' 00:11:28.029 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.029 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.303 [2024-11-27 08:43:24.952074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.303 08:43:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.303 08:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.303 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.303 "name": "Existed_Raid", 00:11:28.303 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:28.303 "strip_size_kb": 64, 
00:11:28.303 "state": "configuring", 00:11:28.303 "raid_level": "raid0", 00:11:28.303 "superblock": true, 00:11:28.303 "num_base_bdevs": 3, 00:11:28.303 "num_base_bdevs_discovered": 1, 00:11:28.303 "num_base_bdevs_operational": 3, 00:11:28.303 "base_bdevs_list": [ 00:11:28.303 { 00:11:28.303 "name": "BaseBdev1", 00:11:28.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.303 "is_configured": false, 00:11:28.303 "data_offset": 0, 00:11:28.303 "data_size": 0 00:11:28.303 }, 00:11:28.303 { 00:11:28.303 "name": null, 00:11:28.303 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:28.303 "is_configured": false, 00:11:28.303 "data_offset": 0, 00:11:28.303 "data_size": 63488 00:11:28.303 }, 00:11:28.303 { 00:11:28.304 "name": "BaseBdev3", 00:11:28.304 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:28.304 "is_configured": true, 00:11:28.304 "data_offset": 2048, 00:11:28.304 "data_size": 63488 00:11:28.304 } 00:11:28.304 ] 00:11:28.304 }' 00:11:28.304 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.304 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.871 [2024-11-27 08:43:25.587058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.871 BaseBdev1 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.871 
[ 00:11:28.871 { 00:11:28.871 "name": "BaseBdev1", 00:11:28.871 "aliases": [ 00:11:28.871 "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2" 00:11:28.871 ], 00:11:28.871 "product_name": "Malloc disk", 00:11:28.871 "block_size": 512, 00:11:28.871 "num_blocks": 65536, 00:11:28.871 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:28.871 "assigned_rate_limits": { 00:11:28.871 "rw_ios_per_sec": 0, 00:11:28.871 "rw_mbytes_per_sec": 0, 00:11:28.871 "r_mbytes_per_sec": 0, 00:11:28.871 "w_mbytes_per_sec": 0 00:11:28.871 }, 00:11:28.871 "claimed": true, 00:11:28.871 "claim_type": "exclusive_write", 00:11:28.871 "zoned": false, 00:11:28.871 "supported_io_types": { 00:11:28.871 "read": true, 00:11:28.871 "write": true, 00:11:28.871 "unmap": true, 00:11:28.871 "flush": true, 00:11:28.871 "reset": true, 00:11:28.871 "nvme_admin": false, 00:11:28.871 "nvme_io": false, 00:11:28.871 "nvme_io_md": false, 00:11:28.871 "write_zeroes": true, 00:11:28.871 "zcopy": true, 00:11:28.871 "get_zone_info": false, 00:11:28.871 "zone_management": false, 00:11:28.871 "zone_append": false, 00:11:28.871 "compare": false, 00:11:28.871 "compare_and_write": false, 00:11:28.871 "abort": true, 00:11:28.871 "seek_hole": false, 00:11:28.871 "seek_data": false, 00:11:28.871 "copy": true, 00:11:28.871 "nvme_iov_md": false 00:11:28.871 }, 00:11:28.871 "memory_domains": [ 00:11:28.871 { 00:11:28.871 "dma_device_id": "system", 00:11:28.871 "dma_device_type": 1 00:11:28.871 }, 00:11:28.871 { 00:11:28.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.871 "dma_device_type": 2 00:11:28.871 } 00:11:28.871 ], 00:11:28.871 "driver_specific": {} 00:11:28.871 } 00:11:28.871 ] 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:11:28.871 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.872 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.130 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.130 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.130 "name": "Existed_Raid", 00:11:29.130 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:29.130 "strip_size_kb": 64, 00:11:29.130 "state": "configuring", 00:11:29.130 "raid_level": "raid0", 00:11:29.130 "superblock": true, 
00:11:29.130 "num_base_bdevs": 3, 00:11:29.130 "num_base_bdevs_discovered": 2, 00:11:29.130 "num_base_bdevs_operational": 3, 00:11:29.130 "base_bdevs_list": [ 00:11:29.130 { 00:11:29.130 "name": "BaseBdev1", 00:11:29.130 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:29.130 "is_configured": true, 00:11:29.130 "data_offset": 2048, 00:11:29.130 "data_size": 63488 00:11:29.130 }, 00:11:29.130 { 00:11:29.130 "name": null, 00:11:29.130 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:29.130 "is_configured": false, 00:11:29.130 "data_offset": 0, 00:11:29.130 "data_size": 63488 00:11:29.130 }, 00:11:29.130 { 00:11:29.130 "name": "BaseBdev3", 00:11:29.130 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:29.130 "is_configured": true, 00:11:29.131 "data_offset": 2048, 00:11:29.131 "data_size": 63488 00:11:29.131 } 00:11:29.131 ] 00:11:29.131 }' 00:11:29.131 08:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.131 08:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.389 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.389 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.389 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.389 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.389 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.647 [2024-11-27 08:43:26.179324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.647 "name": "Existed_Raid", 00:11:29.647 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:29.647 "strip_size_kb": 64, 00:11:29.647 "state": "configuring", 00:11:29.647 "raid_level": "raid0", 00:11:29.647 "superblock": true, 00:11:29.647 "num_base_bdevs": 3, 00:11:29.647 "num_base_bdevs_discovered": 1, 00:11:29.647 "num_base_bdevs_operational": 3, 00:11:29.647 "base_bdevs_list": [ 00:11:29.647 { 00:11:29.647 "name": "BaseBdev1", 00:11:29.647 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:29.647 "is_configured": true, 00:11:29.647 "data_offset": 2048, 00:11:29.647 "data_size": 63488 00:11:29.647 }, 00:11:29.647 { 00:11:29.647 "name": null, 00:11:29.647 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:29.647 "is_configured": false, 00:11:29.647 "data_offset": 0, 00:11:29.647 "data_size": 63488 00:11:29.647 }, 00:11:29.647 { 00:11:29.647 "name": null, 00:11:29.647 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:29.647 "is_configured": false, 00:11:29.647 "data_offset": 0, 00:11:29.647 "data_size": 63488 00:11:29.647 } 00:11:29.647 ] 00:11:29.647 }' 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.647 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.215 [2024-11-27 08:43:26.775493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.215 "name": "Existed_Raid", 00:11:30.215 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:30.215 "strip_size_kb": 64, 00:11:30.215 "state": "configuring", 00:11:30.215 "raid_level": "raid0", 00:11:30.215 "superblock": true, 00:11:30.215 "num_base_bdevs": 3, 00:11:30.215 "num_base_bdevs_discovered": 2, 00:11:30.215 "num_base_bdevs_operational": 3, 00:11:30.215 "base_bdevs_list": [ 00:11:30.215 { 00:11:30.215 "name": "BaseBdev1", 00:11:30.215 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:30.215 "is_configured": true, 00:11:30.215 "data_offset": 2048, 00:11:30.215 "data_size": 63488 00:11:30.215 }, 00:11:30.215 { 00:11:30.215 "name": null, 00:11:30.215 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:30.215 "is_configured": false, 00:11:30.215 "data_offset": 0, 00:11:30.215 "data_size": 63488 00:11:30.215 }, 00:11:30.215 { 00:11:30.215 "name": "BaseBdev3", 00:11:30.215 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:30.215 "is_configured": true, 00:11:30.215 "data_offset": 2048, 00:11:30.215 "data_size": 63488 00:11:30.215 } 00:11:30.215 ] 00:11:30.215 }' 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.215 08:43:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.783 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.783 [2024-11-27 08:43:27.359743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.784 "name": "Existed_Raid", 00:11:30.784 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:30.784 "strip_size_kb": 64, 00:11:30.784 "state": "configuring", 00:11:30.784 "raid_level": "raid0", 00:11:30.784 "superblock": true, 00:11:30.784 "num_base_bdevs": 3, 00:11:30.784 "num_base_bdevs_discovered": 1, 00:11:30.784 "num_base_bdevs_operational": 3, 00:11:30.784 "base_bdevs_list": [ 00:11:30.784 { 00:11:30.784 "name": null, 00:11:30.784 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:30.784 "is_configured": false, 00:11:30.784 "data_offset": 0, 00:11:30.784 "data_size": 63488 00:11:30.784 }, 00:11:30.784 { 00:11:30.784 "name": null, 00:11:30.784 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:30.784 "is_configured": false, 00:11:30.784 "data_offset": 0, 00:11:30.784 
"data_size": 63488 00:11:30.784 }, 00:11:30.784 { 00:11:30.784 "name": "BaseBdev3", 00:11:30.784 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:30.784 "is_configured": true, 00:11:30.784 "data_offset": 2048, 00:11:30.784 "data_size": 63488 00:11:30.784 } 00:11:30.784 ] 00:11:30.784 }' 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.784 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.352 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.352 08:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.352 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.352 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.352 08:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.352 [2024-11-27 08:43:28.030811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:31.352 08:43:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.352 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.353 "name": "Existed_Raid", 00:11:31.353 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:31.353 "strip_size_kb": 64, 00:11:31.353 "state": "configuring", 00:11:31.353 "raid_level": "raid0", 00:11:31.353 "superblock": true, 00:11:31.353 "num_base_bdevs": 3, 00:11:31.353 
"num_base_bdevs_discovered": 2, 00:11:31.353 "num_base_bdevs_operational": 3, 00:11:31.353 "base_bdevs_list": [ 00:11:31.353 { 00:11:31.353 "name": null, 00:11:31.353 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:31.353 "is_configured": false, 00:11:31.353 "data_offset": 0, 00:11:31.353 "data_size": 63488 00:11:31.353 }, 00:11:31.353 { 00:11:31.353 "name": "BaseBdev2", 00:11:31.353 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:31.353 "is_configured": true, 00:11:31.353 "data_offset": 2048, 00:11:31.353 "data_size": 63488 00:11:31.353 }, 00:11:31.353 { 00:11:31.353 "name": "BaseBdev3", 00:11:31.353 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:31.353 "is_configured": true, 00:11:31.353 "data_offset": 2048, 00:11:31.353 "data_size": 63488 00:11:31.353 } 00:11:31.353 ] 00:11:31.353 }' 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.353 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.920 08:43:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.920 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.178 [2024-11-27 08:43:28.693270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:32.178 [2024-11-27 08:43:28.693598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.178 [2024-11-27 08:43:28.693630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:32.178 [2024-11-27 08:43:28.693967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:32.178 NewBaseBdev 00:11:32.178 [2024-11-27 08:43:28.694168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.178 [2024-11-27 08:43:28.694192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.178 [2024-11-27 08:43:28.694392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:11:32.178 
08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.178 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.178 [ 00:11:32.178 { 00:11:32.178 "name": "NewBaseBdev", 00:11:32.178 "aliases": [ 00:11:32.178 "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2" 00:11:32.178 ], 00:11:32.178 "product_name": "Malloc disk", 00:11:32.178 "block_size": 512, 00:11:32.178 "num_blocks": 65536, 00:11:32.178 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:32.178 "assigned_rate_limits": { 00:11:32.178 "rw_ios_per_sec": 0, 00:11:32.178 "rw_mbytes_per_sec": 0, 00:11:32.179 "r_mbytes_per_sec": 0, 00:11:32.179 "w_mbytes_per_sec": 0 00:11:32.179 }, 00:11:32.179 "claimed": true, 00:11:32.179 "claim_type": "exclusive_write", 00:11:32.179 "zoned": false, 00:11:32.179 "supported_io_types": { 00:11:32.179 "read": true, 00:11:32.179 "write": true, 00:11:32.179 
"unmap": true, 00:11:32.179 "flush": true, 00:11:32.179 "reset": true, 00:11:32.179 "nvme_admin": false, 00:11:32.179 "nvme_io": false, 00:11:32.179 "nvme_io_md": false, 00:11:32.179 "write_zeroes": true, 00:11:32.179 "zcopy": true, 00:11:32.179 "get_zone_info": false, 00:11:32.179 "zone_management": false, 00:11:32.179 "zone_append": false, 00:11:32.179 "compare": false, 00:11:32.179 "compare_and_write": false, 00:11:32.179 "abort": true, 00:11:32.179 "seek_hole": false, 00:11:32.179 "seek_data": false, 00:11:32.179 "copy": true, 00:11:32.179 "nvme_iov_md": false 00:11:32.179 }, 00:11:32.179 "memory_domains": [ 00:11:32.179 { 00:11:32.179 "dma_device_id": "system", 00:11:32.179 "dma_device_type": 1 00:11:32.179 }, 00:11:32.179 { 00:11:32.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.179 "dma_device_type": 2 00:11:32.179 } 00:11:32.179 ], 00:11:32.179 "driver_specific": {} 00:11:32.179 } 00:11:32.179 ] 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.179 "name": "Existed_Raid", 00:11:32.179 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:32.179 "strip_size_kb": 64, 00:11:32.179 "state": "online", 00:11:32.179 "raid_level": "raid0", 00:11:32.179 "superblock": true, 00:11:32.179 "num_base_bdevs": 3, 00:11:32.179 "num_base_bdevs_discovered": 3, 00:11:32.179 "num_base_bdevs_operational": 3, 00:11:32.179 "base_bdevs_list": [ 00:11:32.179 { 00:11:32.179 "name": "NewBaseBdev", 00:11:32.179 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:32.179 "is_configured": true, 00:11:32.179 "data_offset": 2048, 00:11:32.179 "data_size": 63488 00:11:32.179 }, 00:11:32.179 { 00:11:32.179 "name": "BaseBdev2", 00:11:32.179 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:32.179 "is_configured": true, 00:11:32.179 "data_offset": 2048, 00:11:32.179 "data_size": 63488 00:11:32.179 }, 00:11:32.179 { 00:11:32.179 "name": "BaseBdev3", 00:11:32.179 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:32.179 
"is_configured": true, 00:11:32.179 "data_offset": 2048, 00:11:32.179 "data_size": 63488 00:11:32.179 } 00:11:32.179 ] 00:11:32.179 }' 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.179 08:43:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.771 [2024-11-27 08:43:29.249896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.771 "name": "Existed_Raid", 00:11:32.771 "aliases": [ 00:11:32.771 "99ca2bce-428e-490c-9245-a014ba239348" 00:11:32.771 ], 00:11:32.771 "product_name": "Raid 
Volume", 00:11:32.771 "block_size": 512, 00:11:32.771 "num_blocks": 190464, 00:11:32.771 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:32.771 "assigned_rate_limits": { 00:11:32.771 "rw_ios_per_sec": 0, 00:11:32.771 "rw_mbytes_per_sec": 0, 00:11:32.771 "r_mbytes_per_sec": 0, 00:11:32.771 "w_mbytes_per_sec": 0 00:11:32.771 }, 00:11:32.771 "claimed": false, 00:11:32.771 "zoned": false, 00:11:32.771 "supported_io_types": { 00:11:32.771 "read": true, 00:11:32.771 "write": true, 00:11:32.771 "unmap": true, 00:11:32.771 "flush": true, 00:11:32.771 "reset": true, 00:11:32.771 "nvme_admin": false, 00:11:32.771 "nvme_io": false, 00:11:32.771 "nvme_io_md": false, 00:11:32.771 "write_zeroes": true, 00:11:32.771 "zcopy": false, 00:11:32.771 "get_zone_info": false, 00:11:32.771 "zone_management": false, 00:11:32.771 "zone_append": false, 00:11:32.771 "compare": false, 00:11:32.771 "compare_and_write": false, 00:11:32.771 "abort": false, 00:11:32.771 "seek_hole": false, 00:11:32.771 "seek_data": false, 00:11:32.771 "copy": false, 00:11:32.771 "nvme_iov_md": false 00:11:32.771 }, 00:11:32.771 "memory_domains": [ 00:11:32.771 { 00:11:32.771 "dma_device_id": "system", 00:11:32.771 "dma_device_type": 1 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.771 "dma_device_type": 2 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "system", 00:11:32.771 "dma_device_type": 1 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.771 "dma_device_type": 2 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "system", 00:11:32.771 "dma_device_type": 1 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.771 "dma_device_type": 2 00:11:32.771 } 00:11:32.771 ], 00:11:32.771 "driver_specific": { 00:11:32.771 "raid": { 00:11:32.771 "uuid": "99ca2bce-428e-490c-9245-a014ba239348", 00:11:32.771 "strip_size_kb": 64, 00:11:32.771 "state": "online", 
00:11:32.771 "raid_level": "raid0", 00:11:32.771 "superblock": true, 00:11:32.771 "num_base_bdevs": 3, 00:11:32.771 "num_base_bdevs_discovered": 3, 00:11:32.771 "num_base_bdevs_operational": 3, 00:11:32.771 "base_bdevs_list": [ 00:11:32.771 { 00:11:32.771 "name": "NewBaseBdev", 00:11:32.771 "uuid": "9daba4a6-1dbd-458a-b112-e9a5cf3e7ed2", 00:11:32.771 "is_configured": true, 00:11:32.771 "data_offset": 2048, 00:11:32.771 "data_size": 63488 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "name": "BaseBdev2", 00:11:32.771 "uuid": "153bc6cd-1049-4e89-919a-fbc68390be3a", 00:11:32.771 "is_configured": true, 00:11:32.771 "data_offset": 2048, 00:11:32.771 "data_size": 63488 00:11:32.771 }, 00:11:32.771 { 00:11:32.771 "name": "BaseBdev3", 00:11:32.771 "uuid": "814176ab-5a3f-4f93-bf07-f3570aa17681", 00:11:32.771 "is_configured": true, 00:11:32.771 "data_offset": 2048, 00:11:32.771 "data_size": 63488 00:11:32.771 } 00:11:32.771 ] 00:11:32.771 } 00:11:32.771 } 00:11:32.771 }' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:32.771 BaseBdev2 00:11:32.771 BaseBdev3' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.771 08:43:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.771 08:43:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.771 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.029 [2024-11-27 08:43:29.577564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.029 [2024-11-27 08:43:29.577616] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.029 [2024-11-27 08:43:29.577715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.029 [2024-11-27 08:43:29.577791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.029 [2024-11-27 08:43:29.577812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64562 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 64562 ']' 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 
64562 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 64562 00:11:33.029 killing process with pid 64562 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 64562' 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 64562 00:11:33.029 [2024-11-27 08:43:29.619328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.029 08:43:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 64562 00:11:33.326 [2024-11-27 08:43:29.887875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.261 08:43:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.261 00:11:34.261 real 0m12.011s 00:11:34.261 user 0m19.773s 00:11:34.261 sys 0m1.799s 00:11:34.261 ************************************ 00:11:34.261 END TEST raid_state_function_test_sb 00:11:34.261 ************************************ 00:11:34.261 08:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:34.261 08:43:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.261 08:43:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:34.261 08:43:30 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:11:34.261 
08:43:30 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:34.261 08:43:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.261 ************************************ 00:11:34.261 START TEST raid_superblock_test 00:11:34.261 ************************************ 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test raid0 3 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65199 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65199 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 65199 ']' 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:34.261 08:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.520 [2024-11-27 08:43:31.120213] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:11:34.520 [2024-11-27 08:43:31.120430] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65199 ] 00:11:34.778 [2024-11-27 08:43:31.309429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.778 [2024-11-27 08:43:31.474558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.036 [2024-11-27 08:43:31.701234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.036 [2024-11-27 08:43:31.701278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:35.603 
08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.603 malloc1 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.603 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.603 [2024-11-27 08:43:32.159617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:35.603 [2024-11-27 08:43:32.159712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.603 [2024-11-27 08:43:32.159750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:35.603 [2024-11-27 08:43:32.159766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.604 [2024-11-27 08:43:32.162728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.604 [2024-11-27 08:43:32.162919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:35.604 pt1 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.604 malloc2 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.604 [2024-11-27 08:43:32.219483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.604 [2024-11-27 08:43:32.219576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.604 [2024-11-27 08:43:32.219610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:35.604 [2024-11-27 08:43:32.219625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.604 [2024-11-27 08:43:32.222596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.604 [2024-11-27 08:43:32.222802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.604 
pt2 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.604 malloc3 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.604 [2024-11-27 08:43:32.287128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:35.604 [2024-11-27 08:43:32.287221] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.604 [2024-11-27 08:43:32.287257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:35.604 [2024-11-27 08:43:32.287274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.604 [2024-11-27 08:43:32.290289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.604 [2024-11-27 08:43:32.290354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:35.604 pt3 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.604 [2024-11-27 08:43:32.299267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:35.604 [2024-11-27 08:43:32.301938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.604 [2024-11-27 08:43:32.302201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:35.604 [2024-11-27 08:43:32.302478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:35.604 [2024-11-27 08:43:32.302503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:35.604 [2024-11-27 08:43:32.302893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:35.604 [2024-11-27 08:43:32.303132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:35.604 [2024-11-27 08:43:32.303149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:35.604 [2024-11-27 08:43:32.303456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.604 08:43:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.604 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.862 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.862 "name": "raid_bdev1", 00:11:35.862 "uuid": "4b609d2d-8b5a-465d-879a-077abe9f8a93", 00:11:35.862 "strip_size_kb": 64, 00:11:35.862 "state": "online", 00:11:35.862 "raid_level": "raid0", 00:11:35.862 "superblock": true, 00:11:35.862 "num_base_bdevs": 3, 00:11:35.862 "num_base_bdevs_discovered": 3, 00:11:35.862 "num_base_bdevs_operational": 3, 00:11:35.862 "base_bdevs_list": [ 00:11:35.862 { 00:11:35.862 "name": "pt1", 00:11:35.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.862 "is_configured": true, 00:11:35.862 "data_offset": 2048, 00:11:35.862 "data_size": 63488 00:11:35.862 }, 00:11:35.862 { 00:11:35.862 "name": "pt2", 00:11:35.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.862 "is_configured": true, 00:11:35.862 "data_offset": 2048, 00:11:35.862 "data_size": 63488 00:11:35.862 }, 00:11:35.862 { 00:11:35.862 "name": "pt3", 00:11:35.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.862 "is_configured": true, 00:11:35.862 "data_offset": 2048, 00:11:35.862 "data_size": 63488 00:11:35.862 } 00:11:35.862 ] 00:11:35.862 }' 00:11:35.862 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.862 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 [2024-11-27 08:43:32.839986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.120 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.379 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.379 "name": "raid_bdev1", 00:11:36.379 "aliases": [ 00:11:36.379 "4b609d2d-8b5a-465d-879a-077abe9f8a93" 00:11:36.379 ], 00:11:36.379 "product_name": "Raid Volume", 00:11:36.379 "block_size": 512, 00:11:36.379 "num_blocks": 190464, 00:11:36.379 "uuid": "4b609d2d-8b5a-465d-879a-077abe9f8a93", 00:11:36.379 "assigned_rate_limits": { 00:11:36.379 "rw_ios_per_sec": 0, 00:11:36.379 "rw_mbytes_per_sec": 0, 00:11:36.379 "r_mbytes_per_sec": 0, 00:11:36.379 "w_mbytes_per_sec": 0 00:11:36.379 }, 00:11:36.379 "claimed": false, 00:11:36.379 "zoned": false, 00:11:36.379 "supported_io_types": { 00:11:36.379 "read": true, 00:11:36.379 "write": true, 00:11:36.379 "unmap": true, 00:11:36.379 "flush": true, 00:11:36.379 "reset": true, 00:11:36.379 "nvme_admin": false, 00:11:36.379 "nvme_io": false, 00:11:36.379 "nvme_io_md": false, 00:11:36.379 "write_zeroes": true, 00:11:36.379 "zcopy": false, 00:11:36.379 "get_zone_info": false, 00:11:36.379 "zone_management": false, 00:11:36.379 "zone_append": false, 00:11:36.379 "compare": 
false, 00:11:36.379 "compare_and_write": false, 00:11:36.379 "abort": false, 00:11:36.379 "seek_hole": false, 00:11:36.379 "seek_data": false, 00:11:36.379 "copy": false, 00:11:36.379 "nvme_iov_md": false 00:11:36.379 }, 00:11:36.379 "memory_domains": [ 00:11:36.379 { 00:11:36.379 "dma_device_id": "system", 00:11:36.379 "dma_device_type": 1 00:11:36.379 }, 00:11:36.379 { 00:11:36.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.379 "dma_device_type": 2 00:11:36.379 }, 00:11:36.379 { 00:11:36.379 "dma_device_id": "system", 00:11:36.379 "dma_device_type": 1 00:11:36.379 }, 00:11:36.379 { 00:11:36.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.379 "dma_device_type": 2 00:11:36.379 }, 00:11:36.379 { 00:11:36.379 "dma_device_id": "system", 00:11:36.379 "dma_device_type": 1 00:11:36.379 }, 00:11:36.379 { 00:11:36.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.379 "dma_device_type": 2 00:11:36.379 } 00:11:36.379 ], 00:11:36.379 "driver_specific": { 00:11:36.379 "raid": { 00:11:36.379 "uuid": "4b609d2d-8b5a-465d-879a-077abe9f8a93", 00:11:36.379 "strip_size_kb": 64, 00:11:36.379 "state": "online", 00:11:36.379 "raid_level": "raid0", 00:11:36.379 "superblock": true, 00:11:36.379 "num_base_bdevs": 3, 00:11:36.379 "num_base_bdevs_discovered": 3, 00:11:36.379 "num_base_bdevs_operational": 3, 00:11:36.379 "base_bdevs_list": [ 00:11:36.379 { 00:11:36.379 "name": "pt1", 00:11:36.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.379 "is_configured": true, 00:11:36.379 "data_offset": 2048, 00:11:36.379 "data_size": 63488 00:11:36.379 }, 00:11:36.379 { 00:11:36.379 "name": "pt2", 00:11:36.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.379 "is_configured": true, 00:11:36.379 "data_offset": 2048, 00:11:36.379 "data_size": 63488 00:11:36.379 }, 00:11:36.379 { 00:11:36.379 "name": "pt3", 00:11:36.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.379 "is_configured": true, 00:11:36.379 "data_offset": 2048, 00:11:36.379 "data_size": 
63488 00:11:36.379 } 00:11:36.379 ] 00:11:36.379 } 00:11:36.379 } 00:11:36.379 }' 00:11:36.379 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.379 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:36.379 pt2 00:11:36.379 pt3' 00:11:36.379 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.380 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.380 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.380 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:36.380 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.380 08:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.380 08:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.380 
08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.380 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.638 [2024-11-27 08:43:33.159930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4b609d2d-8b5a-465d-879a-077abe9f8a93 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4b609d2d-8b5a-465d-879a-077abe9f8a93 ']' 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.638 [2024-11-27 08:43:33.211666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.638 [2024-11-27 08:43:33.211878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.638 [2024-11-27 08:43:33.212085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.638 [2024-11-27 08:43:33.212277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.638 [2024-11-27 08:43:33.212438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.638 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:36.639 08:43:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.639 [2024-11-27 08:43:33.363836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:36.639 [2024-11-27 08:43:33.366586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:36.639 [2024-11-27 08:43:33.366884] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:36.639 [2024-11-27 08:43:33.366972] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:36.639 [2024-11-27 08:43:33.367055] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:36.639 [2024-11-27 08:43:33.367088] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:36.639 [2024-11-27 08:43:33.367117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.639 [2024-11-27 08:43:33.367132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:36.639 request: 00:11:36.639 { 00:11:36.639 "name": "raid_bdev1", 00:11:36.639 "raid_level": "raid0", 00:11:36.639 "base_bdevs": [ 00:11:36.639 "malloc1", 00:11:36.639 "malloc2", 00:11:36.639 "malloc3" 00:11:36.639 ], 00:11:36.639 "strip_size_kb": 64, 00:11:36.639 "superblock": false, 00:11:36.639 "method": "bdev_raid_create", 00:11:36.639 "req_id": 1 00:11:36.639 } 00:11:36.639 Got JSON-RPC error response 00:11:36.639 response: 00:11:36.639 { 00:11:36.639 "code": -17, 00:11:36.639 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:36.639 } 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.639 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.898 [2024-11-27 08:43:33.427883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.898 [2024-11-27 08:43:33.428122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.898 [2024-11-27 08:43:33.428198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:36.898 [2024-11-27 08:43:33.428312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.898 [2024-11-27 08:43:33.431629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.898 [2024-11-27 08:43:33.431878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.898 [2024-11-27 08:43:33.432114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.898 [2024-11-27 08:43:33.432288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:36.898 pt1 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.898 "name": "raid_bdev1", 00:11:36.898 "uuid": "4b609d2d-8b5a-465d-879a-077abe9f8a93", 00:11:36.898 
"strip_size_kb": 64, 00:11:36.898 "state": "configuring", 00:11:36.898 "raid_level": "raid0", 00:11:36.898 "superblock": true, 00:11:36.898 "num_base_bdevs": 3, 00:11:36.898 "num_base_bdevs_discovered": 1, 00:11:36.898 "num_base_bdevs_operational": 3, 00:11:36.898 "base_bdevs_list": [ 00:11:36.898 { 00:11:36.898 "name": "pt1", 00:11:36.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.898 "is_configured": true, 00:11:36.898 "data_offset": 2048, 00:11:36.898 "data_size": 63488 00:11:36.898 }, 00:11:36.898 { 00:11:36.898 "name": null, 00:11:36.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.898 "is_configured": false, 00:11:36.898 "data_offset": 2048, 00:11:36.898 "data_size": 63488 00:11:36.898 }, 00:11:36.898 { 00:11:36.898 "name": null, 00:11:36.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.898 "is_configured": false, 00:11:36.898 "data_offset": 2048, 00:11:36.898 "data_size": 63488 00:11:36.898 } 00:11:36.898 ] 00:11:36.898 }' 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.898 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.465 [2024-11-27 08:43:33.952431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.465 [2024-11-27 08:43:33.952536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.465 [2024-11-27 08:43:33.952576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:37.465 [2024-11-27 08:43:33.952592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.465 [2024-11-27 08:43:33.953308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.465 [2024-11-27 08:43:33.953366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.465 [2024-11-27 08:43:33.953495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:37.465 [2024-11-27 08:43:33.953529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.465 pt2 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.465 [2024-11-27 08:43:33.960454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.465 08:43:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.465 08:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.465 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.465 "name": "raid_bdev1", 00:11:37.465 "uuid": "4b609d2d-8b5a-465d-879a-077abe9f8a93", 00:11:37.465 "strip_size_kb": 64, 00:11:37.465 "state": "configuring", 00:11:37.465 "raid_level": "raid0", 00:11:37.465 "superblock": true, 00:11:37.465 "num_base_bdevs": 3, 00:11:37.465 "num_base_bdevs_discovered": 1, 00:11:37.465 "num_base_bdevs_operational": 3, 00:11:37.465 "base_bdevs_list": [ 00:11:37.465 { 00:11:37.465 "name": "pt1", 00:11:37.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.465 "is_configured": true, 00:11:37.465 "data_offset": 2048, 00:11:37.465 "data_size": 63488 00:11:37.465 }, 00:11:37.465 { 00:11:37.465 "name": null, 00:11:37.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.465 "is_configured": false, 00:11:37.465 "data_offset": 0, 00:11:37.465 "data_size": 63488 00:11:37.465 }, 00:11:37.465 { 00:11:37.465 "name": null, 00:11:37.465 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.465 
"is_configured": false, 00:11:37.465 "data_offset": 2048, 00:11:37.465 "data_size": 63488 00:11:37.465 } 00:11:37.465 ] 00:11:37.465 }' 00:11:37.465 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.465 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.032 [2024-11-27 08:43:34.504614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.032 [2024-11-27 08:43:34.504716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.032 [2024-11-27 08:43:34.504749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:38.032 [2024-11-27 08:43:34.504768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.032 [2024-11-27 08:43:34.505470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.032 [2024-11-27 08:43:34.505503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.032 [2024-11-27 08:43:34.505618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:38.032 [2024-11-27 08:43:34.505665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.032 pt2 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.032 [2024-11-27 08:43:34.512526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:38.032 [2024-11-27 08:43:34.512580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.032 [2024-11-27 08:43:34.512609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:38.032 [2024-11-27 08:43:34.512626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.032 [2024-11-27 08:43:34.513136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.032 [2024-11-27 08:43:34.513178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:38.032 [2024-11-27 08:43:34.513251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:38.032 [2024-11-27 08:43:34.513283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:38.032 [2024-11-27 08:43:34.513467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.032 [2024-11-27 08:43:34.513492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:38.032 [2024-11-27 08:43:34.513821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.032 [2024-11-27 08:43:34.514030] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.032 [2024-11-27 08:43:34.514044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:38.032 [2024-11-27 08:43:34.514240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.032 pt3 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.032 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.032 "name": "raid_bdev1", 00:11:38.032 "uuid": "4b609d2d-8b5a-465d-879a-077abe9f8a93", 00:11:38.032 "strip_size_kb": 64, 00:11:38.032 "state": "online", 00:11:38.032 "raid_level": "raid0", 00:11:38.032 "superblock": true, 00:11:38.032 "num_base_bdevs": 3, 00:11:38.032 "num_base_bdevs_discovered": 3, 00:11:38.032 "num_base_bdevs_operational": 3, 00:11:38.032 "base_bdevs_list": [ 00:11:38.033 { 00:11:38.033 "name": "pt1", 00:11:38.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.033 "is_configured": true, 00:11:38.033 "data_offset": 2048, 00:11:38.033 "data_size": 63488 00:11:38.033 }, 00:11:38.033 { 00:11:38.033 "name": "pt2", 00:11:38.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.033 "is_configured": true, 00:11:38.033 "data_offset": 2048, 00:11:38.033 "data_size": 63488 00:11:38.033 }, 00:11:38.033 { 00:11:38.033 "name": "pt3", 00:11:38.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.033 "is_configured": true, 00:11:38.033 "data_offset": 2048, 00:11:38.033 "data_size": 63488 00:11:38.033 } 00:11:38.033 ] 00:11:38.033 }' 00:11:38.033 08:43:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.033 08:43:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:38.291 08:43:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.291 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 [2024-11-27 08:43:35.041199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.550 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.550 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.550 "name": "raid_bdev1", 00:11:38.550 "aliases": [ 00:11:38.550 "4b609d2d-8b5a-465d-879a-077abe9f8a93" 00:11:38.550 ], 00:11:38.550 "product_name": "Raid Volume", 00:11:38.550 "block_size": 512, 00:11:38.550 "num_blocks": 190464, 00:11:38.550 "uuid": "4b609d2d-8b5a-465d-879a-077abe9f8a93", 00:11:38.550 "assigned_rate_limits": { 00:11:38.550 "rw_ios_per_sec": 0, 00:11:38.550 "rw_mbytes_per_sec": 0, 00:11:38.550 "r_mbytes_per_sec": 0, 00:11:38.550 "w_mbytes_per_sec": 0 00:11:38.550 }, 00:11:38.550 "claimed": false, 00:11:38.550 "zoned": false, 00:11:38.550 "supported_io_types": { 00:11:38.550 "read": true, 00:11:38.550 "write": true, 00:11:38.550 "unmap": true, 00:11:38.550 "flush": true, 00:11:38.550 "reset": true, 00:11:38.550 "nvme_admin": false, 00:11:38.550 "nvme_io": false, 00:11:38.550 "nvme_io_md": false, 00:11:38.550 
"write_zeroes": true, 00:11:38.550 "zcopy": false, 00:11:38.550 "get_zone_info": false, 00:11:38.550 "zone_management": false, 00:11:38.550 "zone_append": false, 00:11:38.550 "compare": false, 00:11:38.550 "compare_and_write": false, 00:11:38.550 "abort": false, 00:11:38.550 "seek_hole": false, 00:11:38.550 "seek_data": false, 00:11:38.550 "copy": false, 00:11:38.550 "nvme_iov_md": false 00:11:38.550 }, 00:11:38.550 "memory_domains": [ 00:11:38.550 { 00:11:38.550 "dma_device_id": "system", 00:11:38.550 "dma_device_type": 1 00:11:38.550 }, 00:11:38.550 { 00:11:38.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.550 "dma_device_type": 2 00:11:38.550 }, 00:11:38.550 { 00:11:38.550 "dma_device_id": "system", 00:11:38.550 "dma_device_type": 1 00:11:38.550 }, 00:11:38.550 { 00:11:38.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.550 "dma_device_type": 2 00:11:38.550 }, 00:11:38.550 { 00:11:38.550 "dma_device_id": "system", 00:11:38.550 "dma_device_type": 1 00:11:38.550 }, 00:11:38.550 { 00:11:38.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.550 "dma_device_type": 2 00:11:38.550 } 00:11:38.550 ], 00:11:38.550 "driver_specific": { 00:11:38.550 "raid": { 00:11:38.550 "uuid": "4b609d2d-8b5a-465d-879a-077abe9f8a93", 00:11:38.550 "strip_size_kb": 64, 00:11:38.550 "state": "online", 00:11:38.550 "raid_level": "raid0", 00:11:38.550 "superblock": true, 00:11:38.550 "num_base_bdevs": 3, 00:11:38.550 "num_base_bdevs_discovered": 3, 00:11:38.550 "num_base_bdevs_operational": 3, 00:11:38.550 "base_bdevs_list": [ 00:11:38.550 { 00:11:38.550 "name": "pt1", 00:11:38.550 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.550 "is_configured": true, 00:11:38.550 "data_offset": 2048, 00:11:38.550 "data_size": 63488 00:11:38.550 }, 00:11:38.550 { 00:11:38.550 "name": "pt2", 00:11:38.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.550 "is_configured": true, 00:11:38.550 "data_offset": 2048, 00:11:38.550 "data_size": 63488 00:11:38.550 }, 00:11:38.550 
{ 00:11:38.550 "name": "pt3", 00:11:38.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.550 "is_configured": true, 00:11:38.550 "data_offset": 2048, 00:11:38.550 "data_size": 63488 00:11:38.550 } 00:11:38.550 ] 00:11:38.550 } 00:11:38.550 } 00:11:38.550 }' 00:11:38.550 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.550 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:38.550 pt2 00:11:38.550 pt3' 00:11:38.550 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:38.551 08:43:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.551 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.809 
[2024-11-27 08:43:35.349265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4b609d2d-8b5a-465d-879a-077abe9f8a93 '!=' 4b609d2d-8b5a-465d-879a-077abe9f8a93 ']' 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65199 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 65199 ']' 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 65199 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # uname 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 65199 00:11:38.809 killing process with pid 65199 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 65199' 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # kill 65199 00:11:38.809 08:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@975 -- # wait 65199 00:11:38.809 [2024-11-27 08:43:35.433718] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.809 [2024-11-27 08:43:35.433899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.809 [2024-11-27 08:43:35.433999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.809 [2024-11-27 08:43:35.434020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:39.067 [2024-11-27 08:43:35.725872] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.443 08:43:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:40.443 00:11:40.443 real 0m5.841s 00:11:40.443 user 0m8.714s 00:11:40.443 sys 0m0.918s 00:11:40.443 08:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:40.443 ************************************ 00:11:40.443 END TEST raid_superblock_test 00:11:40.443 ************************************ 00:11:40.443 08:43:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.443 08:43:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:40.443 08:43:36 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:11:40.443 08:43:36 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:40.443 08:43:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.443 ************************************ 00:11:40.443 START TEST raid_read_error_test 00:11:40.443 ************************************ 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid0 3 read 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:40.443 08:43:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nPZht72KlC 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65458 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65458 00:11:40.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 65458 ']' 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:40.443 08:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.443 [2024-11-27 08:43:37.070551] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:11:40.443 [2024-11-27 08:43:37.070743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65458 ] 00:11:40.702 [2024-11-27 08:43:37.263140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.960 [2024-11-27 08:43:37.465013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.960 [2024-11-27 08:43:37.703475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.960 [2024-11-27 08:43:37.703533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 BaseBdev1_malloc 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 true 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 [2024-11-27 08:43:38.117663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:41.527 [2024-11-27 08:43:38.117911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.527 [2024-11-27 08:43:38.117956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:41.527 [2024-11-27 08:43:38.117986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.527 [2024-11-27 08:43:38.121127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.527 [2024-11-27 08:43:38.121316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.527 BaseBdev1 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 BaseBdev2_malloc 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 true 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 [2024-11-27 08:43:38.183261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:41.527 [2024-11-27 08:43:38.183370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.527 [2024-11-27 08:43:38.183404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:41.527 [2024-11-27 08:43:38.183423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.527 [2024-11-27 08:43:38.186513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.527 [2024-11-27 08:43:38.186567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.527 BaseBdev2 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 BaseBdev3_malloc 00:11:41.527 08:43:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 true 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 [2024-11-27 08:43:38.254916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:41.527 [2024-11-27 08:43:38.255013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.527 [2024-11-27 08:43:38.255058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:41.527 [2024-11-27 08:43:38.255089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.527 [2024-11-27 08:43:38.258773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.527 [2024-11-27 08:43:38.258834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:41.527 BaseBdev3 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 [2024-11-27 08:43:38.263089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.527 [2024-11-27 08:43:38.265755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.527 [2024-11-27 08:43:38.266017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.527 [2024-11-27 08:43:38.266354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.527 [2024-11-27 08:43:38.266383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:41.527 [2024-11-27 08:43:38.266723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:41.527 [2024-11-27 08:43:38.266958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.527 [2024-11-27 08:43:38.266984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:41.527 [2024-11-27 08:43:38.267222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.527 08:43:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.527 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.786 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.786 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.786 "name": "raid_bdev1", 00:11:41.786 "uuid": "59399748-3869-4990-89f7-3541933c0f7f", 00:11:41.786 "strip_size_kb": 64, 00:11:41.786 "state": "online", 00:11:41.786 "raid_level": "raid0", 00:11:41.786 "superblock": true, 00:11:41.786 "num_base_bdevs": 3, 00:11:41.786 "num_base_bdevs_discovered": 3, 00:11:41.786 "num_base_bdevs_operational": 3, 00:11:41.786 "base_bdevs_list": [ 00:11:41.786 { 00:11:41.786 "name": "BaseBdev1", 00:11:41.786 "uuid": "4c5cf424-b3e5-5bb3-9603-f462e42407c7", 00:11:41.786 "is_configured": true, 00:11:41.786 "data_offset": 2048, 00:11:41.786 "data_size": 63488 00:11:41.787 }, 00:11:41.787 { 00:11:41.787 "name": "BaseBdev2", 00:11:41.787 "uuid": "ed5e85cb-a605-55b8-b464-e06eeba1da6a", 00:11:41.787 "is_configured": true, 00:11:41.787 "data_offset": 2048, 00:11:41.787 "data_size": 63488 
00:11:41.787 }, 00:11:41.787 { 00:11:41.787 "name": "BaseBdev3", 00:11:41.787 "uuid": "ab180566-defe-5367-b221-b1cc79875f0b", 00:11:41.787 "is_configured": true, 00:11:41.787 "data_offset": 2048, 00:11:41.787 "data_size": 63488 00:11:41.787 } 00:11:41.787 ] 00:11:41.787 }' 00:11:41.787 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.787 08:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.045 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:42.045 08:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:42.304 [2024-11-27 08:43:38.896899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.244 "name": "raid_bdev1", 00:11:43.244 "uuid": "59399748-3869-4990-89f7-3541933c0f7f", 00:11:43.244 "strip_size_kb": 64, 00:11:43.244 "state": "online", 00:11:43.244 "raid_level": "raid0", 00:11:43.244 "superblock": true, 00:11:43.244 "num_base_bdevs": 3, 00:11:43.244 "num_base_bdevs_discovered": 3, 00:11:43.244 "num_base_bdevs_operational": 3, 00:11:43.244 "base_bdevs_list": [ 00:11:43.244 { 00:11:43.244 "name": "BaseBdev1", 00:11:43.244 "uuid": "4c5cf424-b3e5-5bb3-9603-f462e42407c7", 00:11:43.244 "is_configured": true, 00:11:43.244 "data_offset": 2048, 00:11:43.244 "data_size": 63488 
00:11:43.244 }, 00:11:43.244 { 00:11:43.244 "name": "BaseBdev2", 00:11:43.244 "uuid": "ed5e85cb-a605-55b8-b464-e06eeba1da6a", 00:11:43.244 "is_configured": true, 00:11:43.244 "data_offset": 2048, 00:11:43.244 "data_size": 63488 00:11:43.244 }, 00:11:43.244 { 00:11:43.244 "name": "BaseBdev3", 00:11:43.244 "uuid": "ab180566-defe-5367-b221-b1cc79875f0b", 00:11:43.244 "is_configured": true, 00:11:43.244 "data_offset": 2048, 00:11:43.244 "data_size": 63488 00:11:43.244 } 00:11:43.244 ] 00:11:43.244 }' 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.244 08:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.810 [2024-11-27 08:43:40.303553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.810 [2024-11-27 08:43:40.303740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.810 [2024-11-27 08:43:40.307251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.810 [2024-11-27 08:43:40.307476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.810 [2024-11-27 08:43:40.307587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.810 [2024-11-27 08:43:40.307836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:43.810 { 00:11:43.810 "results": [ 00:11:43.810 { 00:11:43.810 "job": "raid_bdev1", 00:11:43.810 "core_mask": "0x1", 00:11:43.810 "workload": "randrw", 00:11:43.810 "percentage": 50, 
00:11:43.810 "status": "finished", 00:11:43.810 "queue_depth": 1, 00:11:43.810 "io_size": 131072, 00:11:43.810 "runtime": 1.404049, 00:11:43.810 "iops": 9325.885350155158, 00:11:43.810 "mibps": 1165.7356687693948, 00:11:43.810 "io_failed": 1, 00:11:43.810 "io_timeout": 0, 00:11:43.810 "avg_latency_us": 150.1188922906036, 00:11:43.810 "min_latency_us": 37.236363636363635, 00:11:43.810 "max_latency_us": 1854.370909090909 00:11:43.810 } 00:11:43.810 ], 00:11:43.810 "core_count": 1 00:11:43.810 } 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65458 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 65458 ']' 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 65458 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # uname 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 65458 00:11:43.810 killing process with pid 65458 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 65458' 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 65458 00:11:43.810 [2024-11-27 08:43:40.342008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.810 08:43:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 65458 00:11:43.810 [2024-11-27 
08:43:40.562548] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nPZht72KlC 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:45.186 00:11:45.186 real 0m4.849s 00:11:45.186 user 0m5.947s 00:11:45.186 sys 0m0.645s 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:45.186 ************************************ 00:11:45.186 END TEST raid_read_error_test 00:11:45.186 ************************************ 00:11:45.186 08:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.186 08:43:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:45.186 08:43:41 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:11:45.186 08:43:41 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:45.186 08:43:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.186 ************************************ 00:11:45.186 START TEST raid_write_error_test 00:11:45.186 ************************************ 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid0 3 write 00:11:45.186 08:43:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.186 08:43:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.186 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qOvPrydO9F 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65609 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65609 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 65609 ']' 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:45.187 08:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.187 [2024-11-27 08:43:41.936746] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:11:45.187 [2024-11-27 08:43:41.936949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65609 ] 00:11:45.444 [2024-11-27 08:43:42.121737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.703 [2024-11-27 08:43:42.269419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.963 [2024-11-27 08:43:42.493933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.963 [2024-11-27 08:43:42.494039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.233 BaseBdev1_malloc 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.233 true 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.233 [2024-11-27 08:43:42.951918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:46.233 [2024-11-27 08:43:42.952004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.233 [2024-11-27 08:43:42.952037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:46.233 [2024-11-27 08:43:42.952056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.233 [2024-11-27 08:43:42.955115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.233 [2024-11-27 08:43:42.955328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.233 BaseBdev1 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.233 08:43:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.493 BaseBdev2_malloc 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.493 true 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.493 [2024-11-27 08:43:43.020232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:46.493 [2024-11-27 08:43:43.020320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.493 [2024-11-27 08:43:43.020361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:46.493 [2024-11-27 08:43:43.020381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.493 [2024-11-27 08:43:43.023441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.493 [2024-11-27 08:43:43.023496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.493 BaseBdev2 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.493 08:43:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.493 BaseBdev3_malloc 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.493 true 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.493 [2024-11-27 08:43:43.098734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:46.493 [2024-11-27 08:43:43.098825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.493 [2024-11-27 08:43:43.098858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:46.493 [2024-11-27 08:43:43.098878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.493 [2024-11-27 08:43:43.102040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.493 [2024-11-27 08:43:43.102094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:46.493 BaseBdev3 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.493 [2024-11-27 08:43:43.110941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.493 [2024-11-27 08:43:43.113715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.493 [2024-11-27 08:43:43.113841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.493 [2024-11-27 08:43:43.114223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:46.493 [2024-11-27 08:43:43.114247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:46.493 [2024-11-27 08:43:43.114690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:46.493 [2024-11-27 08:43:43.114945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:46.493 [2024-11-27 08:43:43.114971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:46.493 [2024-11-27 08:43:43.115254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.493 "name": "raid_bdev1", 00:11:46.493 "uuid": "89cf377c-ee52-4e80-b17a-c6cc550d8897", 00:11:46.493 "strip_size_kb": 64, 00:11:46.493 "state": "online", 00:11:46.493 "raid_level": "raid0", 00:11:46.493 "superblock": true, 00:11:46.493 "num_base_bdevs": 3, 00:11:46.493 "num_base_bdevs_discovered": 3, 00:11:46.493 "num_base_bdevs_operational": 3, 00:11:46.493 "base_bdevs_list": [ 00:11:46.493 { 00:11:46.493 "name": "BaseBdev1", 
00:11:46.493 "uuid": "5db37f9f-c57e-5486-82f7-0a49e1099176", 00:11:46.493 "is_configured": true, 00:11:46.493 "data_offset": 2048, 00:11:46.493 "data_size": 63488 00:11:46.493 }, 00:11:46.493 { 00:11:46.493 "name": "BaseBdev2", 00:11:46.493 "uuid": "41253a09-50dc-52e5-868b-ad456ef3a251", 00:11:46.493 "is_configured": true, 00:11:46.493 "data_offset": 2048, 00:11:46.493 "data_size": 63488 00:11:46.493 }, 00:11:46.493 { 00:11:46.493 "name": "BaseBdev3", 00:11:46.493 "uuid": "31428ce7-851a-5cb5-adcc-933500646d1d", 00:11:46.493 "is_configured": true, 00:11:46.493 "data_offset": 2048, 00:11:46.493 "data_size": 63488 00:11:46.493 } 00:11:46.493 ] 00:11:46.493 }' 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.493 08:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.062 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.062 08:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.062 [2024-11-27 08:43:43.704895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.999 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.999 "name": "raid_bdev1", 00:11:47.999 "uuid": "89cf377c-ee52-4e80-b17a-c6cc550d8897", 00:11:47.999 "strip_size_kb": 64, 00:11:47.999 "state": "online", 00:11:47.999 
"raid_level": "raid0", 00:11:47.999 "superblock": true, 00:11:47.999 "num_base_bdevs": 3, 00:11:47.999 "num_base_bdevs_discovered": 3, 00:11:47.999 "num_base_bdevs_operational": 3, 00:11:47.999 "base_bdevs_list": [ 00:11:47.999 { 00:11:47.999 "name": "BaseBdev1", 00:11:47.999 "uuid": "5db37f9f-c57e-5486-82f7-0a49e1099176", 00:11:47.999 "is_configured": true, 00:11:47.999 "data_offset": 2048, 00:11:47.999 "data_size": 63488 00:11:47.999 }, 00:11:47.999 { 00:11:47.999 "name": "BaseBdev2", 00:11:47.999 "uuid": "41253a09-50dc-52e5-868b-ad456ef3a251", 00:11:47.999 "is_configured": true, 00:11:47.999 "data_offset": 2048, 00:11:47.999 "data_size": 63488 00:11:48.000 }, 00:11:48.000 { 00:11:48.000 "name": "BaseBdev3", 00:11:48.000 "uuid": "31428ce7-851a-5cb5-adcc-933500646d1d", 00:11:48.000 "is_configured": true, 00:11:48.000 "data_offset": 2048, 00:11:48.000 "data_size": 63488 00:11:48.000 } 00:11:48.000 ] 00:11:48.000 }' 00:11:48.000 08:43:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.000 08:43:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.566 08:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.566 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.566 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.566 [2024-11-27 08:43:45.140237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.566 [2024-11-27 08:43:45.140279] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.566 [2024-11-27 08:43:45.143648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.566 [2024-11-27 08:43:45.143713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.566 [2024-11-27 08:43:45.143780] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.566 [2024-11-27 08:43:45.143797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:48.566 { 00:11:48.566 "results": [ 00:11:48.566 { 00:11:48.566 "job": "raid_bdev1", 00:11:48.566 "core_mask": "0x1", 00:11:48.566 "workload": "randrw", 00:11:48.566 "percentage": 50, 00:11:48.566 "status": "finished", 00:11:48.566 "queue_depth": 1, 00:11:48.566 "io_size": 131072, 00:11:48.566 "runtime": 1.432543, 00:11:48.566 "iops": 9844.032604954964, 00:11:48.566 "mibps": 1230.5040756193705, 00:11:48.566 "io_failed": 1, 00:11:48.566 "io_timeout": 0, 00:11:48.566 "avg_latency_us": 143.1317282589778, 00:11:48.566 "min_latency_us": 43.054545454545455, 00:11:48.566 "max_latency_us": 1839.4763636363637 00:11:48.566 } 00:11:48.566 ], 00:11:48.566 "core_count": 1 00:11:48.566 } 00:11:48.566 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.566 08:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65609 00:11:48.566 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' -z 65609 ']' 00:11:48.566 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 65609 00:11:48.567 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # uname 00:11:48.567 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:48.567 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 65609 00:11:48.567 killing process with pid 65609 00:11:48.567 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:48.567 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:48.567 08:43:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 65609' 00:11:48.567 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 65609 00:11:48.567 [2024-11-27 08:43:45.180288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.567 08:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@975 -- # wait 65609 00:11:48.825 [2024-11-27 08:43:45.398901] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qOvPrydO9F 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:50.202 ************************************ 00:11:50.202 END TEST raid_write_error_test 00:11:50.202 ************************************ 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:50.202 00:11:50.202 real 0m4.783s 00:11:50.202 user 0m5.783s 00:11:50.202 sys 0m0.649s 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:50.202 08:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.202 08:43:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:50.202 08:43:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:50.202 08:43:46 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:11:50.202 08:43:46 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:50.202 08:43:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.202 ************************************ 00:11:50.202 START TEST raid_state_function_test 00:11:50.202 ************************************ 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test concat 3 false 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:50.202 08:43:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.202 Process raid pid: 65747 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65747 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65747' 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65747 00:11:50.202 08:43:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 65747 ']' 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:50.202 08:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.202 [2024-11-27 08:43:46.770088] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:11:50.202 [2024-11-27 08:43:46.770918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.461 [2024-11-27 08:43:46.970798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.461 [2024-11-27 08:43:47.119648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.721 [2024-11-27 08:43:47.349655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.721 [2024-11-27 08:43:47.349970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.311 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.312 [2024-11-27 08:43:47.773108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.312 [2024-11-27 08:43:47.773207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.312 [2024-11-27 08:43:47.773226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.312 [2024-11-27 08:43:47.773244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.312 [2024-11-27 08:43:47.773255] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.312 [2024-11-27 08:43:47.773271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.312 "name": "Existed_Raid", 00:11:51.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.312 "strip_size_kb": 64, 00:11:51.312 "state": "configuring", 00:11:51.312 "raid_level": "concat", 00:11:51.312 "superblock": false, 00:11:51.312 "num_base_bdevs": 3, 00:11:51.312 "num_base_bdevs_discovered": 0, 00:11:51.312 "num_base_bdevs_operational": 3, 00:11:51.312 "base_bdevs_list": [ 00:11:51.312 { 00:11:51.312 "name": "BaseBdev1", 00:11:51.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.312 "is_configured": false, 00:11:51.312 "data_offset": 0, 00:11:51.312 "data_size": 0 00:11:51.312 }, 00:11:51.312 { 00:11:51.312 "name": "BaseBdev2", 00:11:51.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.312 "is_configured": false, 00:11:51.312 "data_offset": 0, 00:11:51.312 "data_size": 0 00:11:51.312 }, 00:11:51.312 { 00:11:51.312 "name": "BaseBdev3", 00:11:51.312 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:51.312 "is_configured": false, 00:11:51.312 "data_offset": 0, 00:11:51.312 "data_size": 0 00:11:51.312 } 00:11:51.312 ] 00:11:51.312 }' 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.312 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.571 [2024-11-27 08:43:48.313206] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.571 [2024-11-27 08:43:48.313272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.571 [2024-11-27 08:43:48.321154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.571 [2024-11-27 08:43:48.321220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.571 [2024-11-27 08:43:48.321237] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.571 [2024-11-27 08:43:48.321254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:11:51.571 [2024-11-27 08:43:48.321264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.571 [2024-11-27 08:43:48.321279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.571 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.830 [2024-11-27 08:43:48.370190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.830 BaseBdev1 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.830 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.830 [ 00:11:51.830 { 00:11:51.830 "name": "BaseBdev1", 00:11:51.830 "aliases": [ 00:11:51.830 "7cebb732-812d-4536-94cb-dc972fdbe2d6" 00:11:51.830 ], 00:11:51.830 "product_name": "Malloc disk", 00:11:51.830 "block_size": 512, 00:11:51.830 "num_blocks": 65536, 00:11:51.830 "uuid": "7cebb732-812d-4536-94cb-dc972fdbe2d6", 00:11:51.830 "assigned_rate_limits": { 00:11:51.830 "rw_ios_per_sec": 0, 00:11:51.830 "rw_mbytes_per_sec": 0, 00:11:51.830 "r_mbytes_per_sec": 0, 00:11:51.830 "w_mbytes_per_sec": 0 00:11:51.830 }, 00:11:51.830 "claimed": true, 00:11:51.830 "claim_type": "exclusive_write", 00:11:51.830 "zoned": false, 00:11:51.830 "supported_io_types": { 00:11:51.830 "read": true, 00:11:51.830 "write": true, 00:11:51.830 "unmap": true, 00:11:51.830 "flush": true, 00:11:51.830 "reset": true, 00:11:51.830 "nvme_admin": false, 00:11:51.830 "nvme_io": false, 00:11:51.830 "nvme_io_md": false, 00:11:51.830 "write_zeroes": true, 00:11:51.830 "zcopy": true, 00:11:51.830 "get_zone_info": false, 00:11:51.830 "zone_management": false, 00:11:51.830 "zone_append": false, 00:11:51.830 "compare": false, 00:11:51.830 "compare_and_write": false, 00:11:51.830 "abort": true, 00:11:51.830 "seek_hole": false, 00:11:51.830 "seek_data": false, 00:11:51.830 "copy": true, 00:11:51.830 "nvme_iov_md": false 00:11:51.830 }, 00:11:51.830 "memory_domains": [ 00:11:51.830 { 00:11:51.830 "dma_device_id": "system", 00:11:51.830 "dma_device_type": 1 00:11:51.830 }, 00:11:51.830 { 00:11:51.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:51.830 "dma_device_type": 2 00:11:51.830 } 00:11:51.830 ], 00:11:51.831 "driver_specific": {} 00:11:51.831 } 00:11:51.831 ] 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.831 08:43:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.831 "name": "Existed_Raid", 00:11:51.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.831 "strip_size_kb": 64, 00:11:51.831 "state": "configuring", 00:11:51.831 "raid_level": "concat", 00:11:51.831 "superblock": false, 00:11:51.831 "num_base_bdevs": 3, 00:11:51.831 "num_base_bdevs_discovered": 1, 00:11:51.831 "num_base_bdevs_operational": 3, 00:11:51.831 "base_bdevs_list": [ 00:11:51.831 { 00:11:51.831 "name": "BaseBdev1", 00:11:51.831 "uuid": "7cebb732-812d-4536-94cb-dc972fdbe2d6", 00:11:51.831 "is_configured": true, 00:11:51.831 "data_offset": 0, 00:11:51.831 "data_size": 65536 00:11:51.831 }, 00:11:51.831 { 00:11:51.831 "name": "BaseBdev2", 00:11:51.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.831 "is_configured": false, 00:11:51.831 "data_offset": 0, 00:11:51.831 "data_size": 0 00:11:51.831 }, 00:11:51.831 { 00:11:51.831 "name": "BaseBdev3", 00:11:51.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.831 "is_configured": false, 00:11:51.831 "data_offset": 0, 00:11:51.831 "data_size": 0 00:11:51.831 } 00:11:51.831 ] 00:11:51.831 }' 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.831 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.399 [2024-11-27 08:43:48.878429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.399 [2024-11-27 08:43:48.878528] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.399 [2024-11-27 08:43:48.886530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.399 [2024-11-27 08:43:48.889514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.399 [2024-11-27 08:43:48.889747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.399 [2024-11-27 08:43:48.889782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.399 [2024-11-27 08:43:48.889803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.399 08:43:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.399 "name": "Existed_Raid", 00:11:52.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.399 "strip_size_kb": 64, 00:11:52.399 "state": "configuring", 00:11:52.399 "raid_level": "concat", 00:11:52.399 "superblock": false, 00:11:52.399 "num_base_bdevs": 3, 00:11:52.399 "num_base_bdevs_discovered": 1, 00:11:52.399 "num_base_bdevs_operational": 3, 00:11:52.399 "base_bdevs_list": [ 00:11:52.399 { 00:11:52.399 "name": "BaseBdev1", 00:11:52.399 "uuid": "7cebb732-812d-4536-94cb-dc972fdbe2d6", 00:11:52.399 "is_configured": true, 00:11:52.399 "data_offset": 
0, 00:11:52.399 "data_size": 65536 00:11:52.399 }, 00:11:52.399 { 00:11:52.399 "name": "BaseBdev2", 00:11:52.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.399 "is_configured": false, 00:11:52.399 "data_offset": 0, 00:11:52.399 "data_size": 0 00:11:52.399 }, 00:11:52.399 { 00:11:52.399 "name": "BaseBdev3", 00:11:52.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.399 "is_configured": false, 00:11:52.399 "data_offset": 0, 00:11:52.399 "data_size": 0 00:11:52.399 } 00:11:52.399 ] 00:11:52.399 }' 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.399 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.657 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.657 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.657 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.915 [2024-11-27 08:43:49.458514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.915 BaseBdev2 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 
00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.915 [ 00:11:52.915 { 00:11:52.915 "name": "BaseBdev2", 00:11:52.915 "aliases": [ 00:11:52.915 "c7a06673-759d-43c2-9176-e30beba9666d" 00:11:52.915 ], 00:11:52.915 "product_name": "Malloc disk", 00:11:52.915 "block_size": 512, 00:11:52.915 "num_blocks": 65536, 00:11:52.915 "uuid": "c7a06673-759d-43c2-9176-e30beba9666d", 00:11:52.915 "assigned_rate_limits": { 00:11:52.915 "rw_ios_per_sec": 0, 00:11:52.915 "rw_mbytes_per_sec": 0, 00:11:52.915 "r_mbytes_per_sec": 0, 00:11:52.915 "w_mbytes_per_sec": 0 00:11:52.915 }, 00:11:52.915 "claimed": true, 00:11:52.915 "claim_type": "exclusive_write", 00:11:52.915 "zoned": false, 00:11:52.915 "supported_io_types": { 00:11:52.915 "read": true, 00:11:52.915 "write": true, 00:11:52.915 "unmap": true, 00:11:52.915 "flush": true, 00:11:52.915 "reset": true, 00:11:52.915 "nvme_admin": false, 00:11:52.915 "nvme_io": false, 00:11:52.915 "nvme_io_md": false, 00:11:52.915 "write_zeroes": true, 00:11:52.915 "zcopy": true, 00:11:52.915 "get_zone_info": false, 00:11:52.915 "zone_management": false, 00:11:52.915 "zone_append": false, 00:11:52.915 "compare": false, 00:11:52.915 "compare_and_write": false, 00:11:52.915 "abort": true, 00:11:52.915 "seek_hole": 
false, 00:11:52.915 "seek_data": false, 00:11:52.915 "copy": true, 00:11:52.915 "nvme_iov_md": false 00:11:52.915 }, 00:11:52.915 "memory_domains": [ 00:11:52.915 { 00:11:52.915 "dma_device_id": "system", 00:11:52.915 "dma_device_type": 1 00:11:52.915 }, 00:11:52.915 { 00:11:52.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.915 "dma_device_type": 2 00:11:52.915 } 00:11:52.915 ], 00:11:52.915 "driver_specific": {} 00:11:52.915 } 00:11:52.915 ] 00:11:52.915 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.916 "name": "Existed_Raid", 00:11:52.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.916 "strip_size_kb": 64, 00:11:52.916 "state": "configuring", 00:11:52.916 "raid_level": "concat", 00:11:52.916 "superblock": false, 00:11:52.916 "num_base_bdevs": 3, 00:11:52.916 "num_base_bdevs_discovered": 2, 00:11:52.916 "num_base_bdevs_operational": 3, 00:11:52.916 "base_bdevs_list": [ 00:11:52.916 { 00:11:52.916 "name": "BaseBdev1", 00:11:52.916 "uuid": "7cebb732-812d-4536-94cb-dc972fdbe2d6", 00:11:52.916 "is_configured": true, 00:11:52.916 "data_offset": 0, 00:11:52.916 "data_size": 65536 00:11:52.916 }, 00:11:52.916 { 00:11:52.916 "name": "BaseBdev2", 00:11:52.916 "uuid": "c7a06673-759d-43c2-9176-e30beba9666d", 00:11:52.916 "is_configured": true, 00:11:52.916 "data_offset": 0, 00:11:52.916 "data_size": 65536 00:11:52.916 }, 00:11:52.916 { 00:11:52.916 "name": "BaseBdev3", 00:11:52.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.916 "is_configured": false, 00:11:52.916 "data_offset": 0, 00:11:52.916 "data_size": 0 00:11:52.916 } 00:11:52.916 ] 00:11:52.916 }' 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.916 08:43:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 [2024-11-27 08:43:50.066290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.483 [2024-11-27 08:43:50.066397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:53.483 [2024-11-27 08:43:50.066421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:53.483 [2024-11-27 08:43:50.066798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:53.483 [2024-11-27 08:43:50.067053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.483 [2024-11-27 08:43:50.067072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:53.483 [2024-11-27 08:43:50.067468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.483 BaseBdev3 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:53.483 08:43:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 [ 00:11:53.483 { 00:11:53.483 "name": "BaseBdev3", 00:11:53.483 "aliases": [ 00:11:53.483 "b45a6c26-5264-4ab3-9c8a-645f3547cca4" 00:11:53.483 ], 00:11:53.483 "product_name": "Malloc disk", 00:11:53.483 "block_size": 512, 00:11:53.483 "num_blocks": 65536, 00:11:53.483 "uuid": "b45a6c26-5264-4ab3-9c8a-645f3547cca4", 00:11:53.483 "assigned_rate_limits": { 00:11:53.483 "rw_ios_per_sec": 0, 00:11:53.483 "rw_mbytes_per_sec": 0, 00:11:53.483 "r_mbytes_per_sec": 0, 00:11:53.483 "w_mbytes_per_sec": 0 00:11:53.484 }, 00:11:53.484 "claimed": true, 00:11:53.484 "claim_type": "exclusive_write", 00:11:53.484 "zoned": false, 00:11:53.484 "supported_io_types": { 00:11:53.484 "read": true, 00:11:53.484 "write": true, 00:11:53.484 "unmap": true, 00:11:53.484 "flush": true, 00:11:53.484 "reset": true, 00:11:53.484 "nvme_admin": false, 00:11:53.484 "nvme_io": false, 00:11:53.484 "nvme_io_md": false, 00:11:53.484 "write_zeroes": true, 00:11:53.484 "zcopy": true, 00:11:53.484 "get_zone_info": false, 00:11:53.484 "zone_management": false, 00:11:53.484 "zone_append": false, 00:11:53.484 "compare": false, 
00:11:53.484 "compare_and_write": false, 00:11:53.484 "abort": true, 00:11:53.484 "seek_hole": false, 00:11:53.484 "seek_data": false, 00:11:53.484 "copy": true, 00:11:53.484 "nvme_iov_md": false 00:11:53.484 }, 00:11:53.484 "memory_domains": [ 00:11:53.484 { 00:11:53.484 "dma_device_id": "system", 00:11:53.484 "dma_device_type": 1 00:11:53.484 }, 00:11:53.484 { 00:11:53.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.484 "dma_device_type": 2 00:11:53.484 } 00:11:53.484 ], 00:11:53.484 "driver_specific": {} 00:11:53.484 } 00:11:53.484 ] 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.484 "name": "Existed_Raid", 00:11:53.484 "uuid": "efa29498-8bd2-4e17-a818-949ce284443f", 00:11:53.484 "strip_size_kb": 64, 00:11:53.484 "state": "online", 00:11:53.484 "raid_level": "concat", 00:11:53.484 "superblock": false, 00:11:53.484 "num_base_bdevs": 3, 00:11:53.484 "num_base_bdevs_discovered": 3, 00:11:53.484 "num_base_bdevs_operational": 3, 00:11:53.484 "base_bdevs_list": [ 00:11:53.484 { 00:11:53.484 "name": "BaseBdev1", 00:11:53.484 "uuid": "7cebb732-812d-4536-94cb-dc972fdbe2d6", 00:11:53.484 "is_configured": true, 00:11:53.484 "data_offset": 0, 00:11:53.484 "data_size": 65536 00:11:53.484 }, 00:11:53.484 { 00:11:53.484 "name": "BaseBdev2", 00:11:53.484 "uuid": "c7a06673-759d-43c2-9176-e30beba9666d", 00:11:53.484 "is_configured": true, 00:11:53.484 "data_offset": 0, 00:11:53.484 "data_size": 65536 00:11:53.484 }, 00:11:53.484 { 00:11:53.484 "name": "BaseBdev3", 00:11:53.484 "uuid": "b45a6c26-5264-4ab3-9c8a-645f3547cca4", 00:11:53.484 "is_configured": true, 00:11:53.484 "data_offset": 0, 00:11:53.484 "data_size": 65536 00:11:53.484 } 00:11:53.484 ] 00:11:53.484 }' 00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:53.484 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.051 [2024-11-27 08:43:50.622941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.051 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.051 "name": "Existed_Raid", 00:11:54.051 "aliases": [ 00:11:54.051 "efa29498-8bd2-4e17-a818-949ce284443f" 00:11:54.051 ], 00:11:54.051 "product_name": "Raid Volume", 00:11:54.051 "block_size": 512, 00:11:54.051 "num_blocks": 196608, 00:11:54.051 "uuid": "efa29498-8bd2-4e17-a818-949ce284443f", 00:11:54.051 "assigned_rate_limits": { 00:11:54.051 "rw_ios_per_sec": 0, 00:11:54.051 "rw_mbytes_per_sec": 0, 00:11:54.051 "r_mbytes_per_sec": 
0, 00:11:54.051 "w_mbytes_per_sec": 0 00:11:54.051 }, 00:11:54.051 "claimed": false, 00:11:54.051 "zoned": false, 00:11:54.051 "supported_io_types": { 00:11:54.051 "read": true, 00:11:54.051 "write": true, 00:11:54.051 "unmap": true, 00:11:54.051 "flush": true, 00:11:54.051 "reset": true, 00:11:54.051 "nvme_admin": false, 00:11:54.051 "nvme_io": false, 00:11:54.051 "nvme_io_md": false, 00:11:54.051 "write_zeroes": true, 00:11:54.051 "zcopy": false, 00:11:54.051 "get_zone_info": false, 00:11:54.051 "zone_management": false, 00:11:54.051 "zone_append": false, 00:11:54.051 "compare": false, 00:11:54.051 "compare_and_write": false, 00:11:54.051 "abort": false, 00:11:54.051 "seek_hole": false, 00:11:54.051 "seek_data": false, 00:11:54.051 "copy": false, 00:11:54.051 "nvme_iov_md": false 00:11:54.051 }, 00:11:54.051 "memory_domains": [ 00:11:54.051 { 00:11:54.051 "dma_device_id": "system", 00:11:54.051 "dma_device_type": 1 00:11:54.051 }, 00:11:54.051 { 00:11:54.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.051 "dma_device_type": 2 00:11:54.051 }, 00:11:54.051 { 00:11:54.051 "dma_device_id": "system", 00:11:54.051 "dma_device_type": 1 00:11:54.051 }, 00:11:54.051 { 00:11:54.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.051 "dma_device_type": 2 00:11:54.051 }, 00:11:54.051 { 00:11:54.051 "dma_device_id": "system", 00:11:54.051 "dma_device_type": 1 00:11:54.051 }, 00:11:54.051 { 00:11:54.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.051 "dma_device_type": 2 00:11:54.051 } 00:11:54.051 ], 00:11:54.051 "driver_specific": { 00:11:54.051 "raid": { 00:11:54.051 "uuid": "efa29498-8bd2-4e17-a818-949ce284443f", 00:11:54.051 "strip_size_kb": 64, 00:11:54.051 "state": "online", 00:11:54.051 "raid_level": "concat", 00:11:54.051 "superblock": false, 00:11:54.051 "num_base_bdevs": 3, 00:11:54.051 "num_base_bdevs_discovered": 3, 00:11:54.051 "num_base_bdevs_operational": 3, 00:11:54.051 "base_bdevs_list": [ 00:11:54.051 { 00:11:54.051 "name": "BaseBdev1", 
00:11:54.051 "uuid": "7cebb732-812d-4536-94cb-dc972fdbe2d6", 00:11:54.052 "is_configured": true, 00:11:54.052 "data_offset": 0, 00:11:54.052 "data_size": 65536 00:11:54.052 }, 00:11:54.052 { 00:11:54.052 "name": "BaseBdev2", 00:11:54.052 "uuid": "c7a06673-759d-43c2-9176-e30beba9666d", 00:11:54.052 "is_configured": true, 00:11:54.052 "data_offset": 0, 00:11:54.052 "data_size": 65536 00:11:54.052 }, 00:11:54.052 { 00:11:54.052 "name": "BaseBdev3", 00:11:54.052 "uuid": "b45a6c26-5264-4ab3-9c8a-645f3547cca4", 00:11:54.052 "is_configured": true, 00:11:54.052 "data_offset": 0, 00:11:54.052 "data_size": 65536 00:11:54.052 } 00:11:54.052 ] 00:11:54.052 } 00:11:54.052 } 00:11:54.052 }' 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:54.052 BaseBdev2 00:11:54.052 BaseBdev3' 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.052 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 [2024-11-27 08:43:50.938627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.311 [2024-11-27 08:43:50.938669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.311 [2024-11-27 08:43:50.938749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.311 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.570 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.570 "name": "Existed_Raid", 00:11:54.570 "uuid": "efa29498-8bd2-4e17-a818-949ce284443f", 00:11:54.570 "strip_size_kb": 64, 00:11:54.570 "state": "offline", 00:11:54.570 "raid_level": "concat", 00:11:54.570 "superblock": false, 00:11:54.570 "num_base_bdevs": 3, 00:11:54.570 "num_base_bdevs_discovered": 2, 00:11:54.570 "num_base_bdevs_operational": 2, 00:11:54.570 "base_bdevs_list": [ 00:11:54.570 { 00:11:54.570 "name": null, 00:11:54.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.570 "is_configured": false, 00:11:54.570 "data_offset": 0, 00:11:54.570 "data_size": 65536 00:11:54.570 }, 00:11:54.570 { 00:11:54.570 "name": "BaseBdev2", 00:11:54.570 "uuid": 
"c7a06673-759d-43c2-9176-e30beba9666d", 00:11:54.570 "is_configured": true, 00:11:54.570 "data_offset": 0, 00:11:54.570 "data_size": 65536 00:11:54.570 }, 00:11:54.570 { 00:11:54.570 "name": "BaseBdev3", 00:11:54.570 "uuid": "b45a6c26-5264-4ab3-9c8a-645f3547cca4", 00:11:54.570 "is_configured": true, 00:11:54.570 "data_offset": 0, 00:11:54.570 "data_size": 65536 00:11:54.570 } 00:11:54.570 ] 00:11:54.570 }' 00:11:54.570 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.570 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.828 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:54.828 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.828 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.828 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.828 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.828 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.828 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.829 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.829 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.829 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:54.829 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.829 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.829 [2024-11-27 08:43:51.568259] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.087 [2024-11-27 08:43:51.716392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.087 [2024-11-27 08:43:51.716485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.087 08:43:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.087 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.346 BaseBdev2 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:55.346 
08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.346 [ 00:11:55.346 { 00:11:55.346 "name": "BaseBdev2", 00:11:55.346 "aliases": [ 00:11:55.346 "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2" 00:11:55.346 ], 00:11:55.346 "product_name": "Malloc disk", 00:11:55.346 "block_size": 512, 00:11:55.346 "num_blocks": 65536, 00:11:55.346 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:55.346 "assigned_rate_limits": { 00:11:55.346 "rw_ios_per_sec": 0, 00:11:55.346 "rw_mbytes_per_sec": 0, 00:11:55.346 "r_mbytes_per_sec": 0, 00:11:55.346 "w_mbytes_per_sec": 0 00:11:55.346 }, 00:11:55.346 "claimed": false, 00:11:55.346 "zoned": false, 00:11:55.346 "supported_io_types": { 00:11:55.346 "read": true, 00:11:55.346 "write": true, 00:11:55.346 "unmap": true, 00:11:55.346 "flush": true, 00:11:55.346 "reset": true, 00:11:55.346 "nvme_admin": false, 00:11:55.346 "nvme_io": false, 00:11:55.346 "nvme_io_md": false, 00:11:55.346 "write_zeroes": true, 
00:11:55.346 "zcopy": true, 00:11:55.346 "get_zone_info": false, 00:11:55.346 "zone_management": false, 00:11:55.346 "zone_append": false, 00:11:55.346 "compare": false, 00:11:55.346 "compare_and_write": false, 00:11:55.346 "abort": true, 00:11:55.346 "seek_hole": false, 00:11:55.346 "seek_data": false, 00:11:55.346 "copy": true, 00:11:55.346 "nvme_iov_md": false 00:11:55.346 }, 00:11:55.346 "memory_domains": [ 00:11:55.346 { 00:11:55.346 "dma_device_id": "system", 00:11:55.346 "dma_device_type": 1 00:11:55.346 }, 00:11:55.346 { 00:11:55.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.346 "dma_device_type": 2 00:11:55.346 } 00:11:55.346 ], 00:11:55.346 "driver_specific": {} 00:11:55.346 } 00:11:55.346 ] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.346 BaseBdev3 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:55.346 08:43:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.346 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.347 08:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.347 [ 00:11:55.347 { 00:11:55.347 "name": "BaseBdev3", 00:11:55.347 "aliases": [ 00:11:55.347 "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec" 00:11:55.347 ], 00:11:55.347 "product_name": "Malloc disk", 00:11:55.347 "block_size": 512, 00:11:55.347 "num_blocks": 65536, 00:11:55.347 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:55.347 "assigned_rate_limits": { 00:11:55.347 "rw_ios_per_sec": 0, 00:11:55.347 "rw_mbytes_per_sec": 0, 00:11:55.347 "r_mbytes_per_sec": 0, 00:11:55.347 "w_mbytes_per_sec": 0 00:11:55.347 }, 00:11:55.347 "claimed": false, 00:11:55.347 "zoned": false, 00:11:55.347 "supported_io_types": { 00:11:55.347 "read": true, 00:11:55.347 "write": true, 00:11:55.347 "unmap": true, 00:11:55.347 "flush": true, 00:11:55.347 "reset": true, 00:11:55.347 "nvme_admin": false, 00:11:55.347 "nvme_io": false, 00:11:55.347 "nvme_io_md": false, 00:11:55.347 "write_zeroes": true, 
00:11:55.347 "zcopy": true, 00:11:55.347 "get_zone_info": false, 00:11:55.347 "zone_management": false, 00:11:55.347 "zone_append": false, 00:11:55.347 "compare": false, 00:11:55.347 "compare_and_write": false, 00:11:55.347 "abort": true, 00:11:55.347 "seek_hole": false, 00:11:55.347 "seek_data": false, 00:11:55.347 "copy": true, 00:11:55.347 "nvme_iov_md": false 00:11:55.347 }, 00:11:55.347 "memory_domains": [ 00:11:55.347 { 00:11:55.347 "dma_device_id": "system", 00:11:55.347 "dma_device_type": 1 00:11:55.347 }, 00:11:55.347 { 00:11:55.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.347 "dma_device_type": 2 00:11:55.347 } 00:11:55.347 ], 00:11:55.347 "driver_specific": {} 00:11:55.347 } 00:11:55.347 ] 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.347 [2024-11-27 08:43:52.020420] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.347 [2024-11-27 08:43:52.020728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.347 [2024-11-27 08:43:52.020884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.347 [2024-11-27 08:43:52.023722] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.347 "name": "Existed_Raid", 00:11:55.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.347 "strip_size_kb": 64, 00:11:55.347 "state": "configuring", 00:11:55.347 "raid_level": "concat", 00:11:55.347 "superblock": false, 00:11:55.347 "num_base_bdevs": 3, 00:11:55.347 "num_base_bdevs_discovered": 2, 00:11:55.347 "num_base_bdevs_operational": 3, 00:11:55.347 "base_bdevs_list": [ 00:11:55.347 { 00:11:55.347 "name": "BaseBdev1", 00:11:55.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.347 "is_configured": false, 00:11:55.347 "data_offset": 0, 00:11:55.347 "data_size": 0 00:11:55.347 }, 00:11:55.347 { 00:11:55.347 "name": "BaseBdev2", 00:11:55.347 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:55.347 "is_configured": true, 00:11:55.347 "data_offset": 0, 00:11:55.347 "data_size": 65536 00:11:55.347 }, 00:11:55.347 { 00:11:55.347 "name": "BaseBdev3", 00:11:55.347 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:55.347 "is_configured": true, 00:11:55.347 "data_offset": 0, 00:11:55.347 "data_size": 65536 00:11:55.347 } 00:11:55.347 ] 00:11:55.347 }' 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.347 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.915 [2024-11-27 08:43:52.524575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.915 "name": "Existed_Raid", 00:11:55.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.915 "strip_size_kb": 64, 00:11:55.915 "state": "configuring", 00:11:55.915 "raid_level": "concat", 00:11:55.915 "superblock": false, 
00:11:55.915 "num_base_bdevs": 3, 00:11:55.915 "num_base_bdevs_discovered": 1, 00:11:55.915 "num_base_bdevs_operational": 3, 00:11:55.915 "base_bdevs_list": [ 00:11:55.915 { 00:11:55.915 "name": "BaseBdev1", 00:11:55.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.915 "is_configured": false, 00:11:55.915 "data_offset": 0, 00:11:55.915 "data_size": 0 00:11:55.915 }, 00:11:55.915 { 00:11:55.915 "name": null, 00:11:55.915 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:55.915 "is_configured": false, 00:11:55.915 "data_offset": 0, 00:11:55.915 "data_size": 65536 00:11:55.915 }, 00:11:55.915 { 00:11:55.915 "name": "BaseBdev3", 00:11:55.915 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:55.915 "is_configured": true, 00:11:55.915 "data_offset": 0, 00:11:55.915 "data_size": 65536 00:11:55.915 } 00:11:55.915 ] 00:11:55.915 }' 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.915 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.503 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.503 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.503 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.503 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:56.503 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.503 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:56.503 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.503 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.503 
08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.504 [2024-11-27 08:43:53.118222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.504 BaseBdev1 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.504 [ 00:11:56.504 { 00:11:56.504 "name": "BaseBdev1", 00:11:56.504 "aliases": [ 00:11:56.504 "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229" 00:11:56.504 ], 00:11:56.504 "product_name": 
"Malloc disk", 00:11:56.504 "block_size": 512, 00:11:56.504 "num_blocks": 65536, 00:11:56.504 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:11:56.504 "assigned_rate_limits": { 00:11:56.504 "rw_ios_per_sec": 0, 00:11:56.504 "rw_mbytes_per_sec": 0, 00:11:56.504 "r_mbytes_per_sec": 0, 00:11:56.504 "w_mbytes_per_sec": 0 00:11:56.504 }, 00:11:56.504 "claimed": true, 00:11:56.504 "claim_type": "exclusive_write", 00:11:56.504 "zoned": false, 00:11:56.504 "supported_io_types": { 00:11:56.504 "read": true, 00:11:56.504 "write": true, 00:11:56.504 "unmap": true, 00:11:56.504 "flush": true, 00:11:56.504 "reset": true, 00:11:56.504 "nvme_admin": false, 00:11:56.504 "nvme_io": false, 00:11:56.504 "nvme_io_md": false, 00:11:56.504 "write_zeroes": true, 00:11:56.504 "zcopy": true, 00:11:56.504 "get_zone_info": false, 00:11:56.504 "zone_management": false, 00:11:56.504 "zone_append": false, 00:11:56.504 "compare": false, 00:11:56.504 "compare_and_write": false, 00:11:56.504 "abort": true, 00:11:56.504 "seek_hole": false, 00:11:56.504 "seek_data": false, 00:11:56.504 "copy": true, 00:11:56.504 "nvme_iov_md": false 00:11:56.504 }, 00:11:56.504 "memory_domains": [ 00:11:56.504 { 00:11:56.504 "dma_device_id": "system", 00:11:56.504 "dma_device_type": 1 00:11:56.504 }, 00:11:56.504 { 00:11:56.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.504 "dma_device_type": 2 00:11:56.504 } 00:11:56.504 ], 00:11:56.504 "driver_specific": {} 00:11:56.504 } 00:11:56.504 ] 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.504 08:43:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.504 "name": "Existed_Raid", 00:11:56.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.504 "strip_size_kb": 64, 00:11:56.504 "state": "configuring", 00:11:56.504 "raid_level": "concat", 00:11:56.504 "superblock": false, 00:11:56.504 "num_base_bdevs": 3, 00:11:56.504 "num_base_bdevs_discovered": 2, 00:11:56.504 "num_base_bdevs_operational": 3, 00:11:56.504 "base_bdevs_list": [ 00:11:56.504 { 00:11:56.504 "name": "BaseBdev1", 
00:11:56.504 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:11:56.504 "is_configured": true, 00:11:56.504 "data_offset": 0, 00:11:56.504 "data_size": 65536 00:11:56.504 }, 00:11:56.504 { 00:11:56.504 "name": null, 00:11:56.504 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:56.504 "is_configured": false, 00:11:56.504 "data_offset": 0, 00:11:56.504 "data_size": 65536 00:11:56.504 }, 00:11:56.504 { 00:11:56.504 "name": "BaseBdev3", 00:11:56.504 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:56.504 "is_configured": true, 00:11:56.504 "data_offset": 0, 00:11:56.504 "data_size": 65536 00:11:56.504 } 00:11:56.504 ] 00:11:56.504 }' 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.504 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.072 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.073 [2024-11-27 08:43:53.726507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.073 
08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.073 "name": "Existed_Raid", 00:11:57.073 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:57.073 "strip_size_kb": 64, 00:11:57.073 "state": "configuring", 00:11:57.073 "raid_level": "concat", 00:11:57.073 "superblock": false, 00:11:57.073 "num_base_bdevs": 3, 00:11:57.073 "num_base_bdevs_discovered": 1, 00:11:57.073 "num_base_bdevs_operational": 3, 00:11:57.073 "base_bdevs_list": [ 00:11:57.073 { 00:11:57.073 "name": "BaseBdev1", 00:11:57.073 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:11:57.073 "is_configured": true, 00:11:57.073 "data_offset": 0, 00:11:57.073 "data_size": 65536 00:11:57.073 }, 00:11:57.073 { 00:11:57.073 "name": null, 00:11:57.073 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:57.073 "is_configured": false, 00:11:57.073 "data_offset": 0, 00:11:57.073 "data_size": 65536 00:11:57.073 }, 00:11:57.073 { 00:11:57.073 "name": null, 00:11:57.073 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:57.073 "is_configured": false, 00:11:57.073 "data_offset": 0, 00:11:57.073 "data_size": 65536 00:11:57.073 } 00:11:57.073 ] 00:11:57.073 }' 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.073 08:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.639 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.640 [2024-11-27 08:43:54.278692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.640 "name": "Existed_Raid", 00:11:57.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.640 "strip_size_kb": 64, 00:11:57.640 "state": "configuring", 00:11:57.640 "raid_level": "concat", 00:11:57.640 "superblock": false, 00:11:57.640 "num_base_bdevs": 3, 00:11:57.640 "num_base_bdevs_discovered": 2, 00:11:57.640 "num_base_bdevs_operational": 3, 00:11:57.640 "base_bdevs_list": [ 00:11:57.640 { 00:11:57.640 "name": "BaseBdev1", 00:11:57.640 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:11:57.640 "is_configured": true, 00:11:57.640 "data_offset": 0, 00:11:57.640 "data_size": 65536 00:11:57.640 }, 00:11:57.640 { 00:11:57.640 "name": null, 00:11:57.640 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:57.640 "is_configured": false, 00:11:57.640 "data_offset": 0, 00:11:57.640 "data_size": 65536 00:11:57.640 }, 00:11:57.640 { 00:11:57.640 "name": "BaseBdev3", 00:11:57.640 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:57.640 "is_configured": true, 00:11:57.640 "data_offset": 0, 00:11:57.640 "data_size": 65536 00:11:57.640 } 00:11:57.640 ] 00:11:57.640 }' 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.640 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.207 [2024-11-27 08:43:54.826883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.207 
08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.207 08:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.466 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.466 "name": "Existed_Raid", 00:11:58.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.466 "strip_size_kb": 64, 00:11:58.466 "state": "configuring", 00:11:58.466 "raid_level": "concat", 00:11:58.466 "superblock": false, 00:11:58.466 "num_base_bdevs": 3, 00:11:58.466 "num_base_bdevs_discovered": 1, 00:11:58.466 "num_base_bdevs_operational": 3, 00:11:58.466 "base_bdevs_list": [ 00:11:58.466 { 00:11:58.466 "name": null, 00:11:58.466 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:11:58.466 "is_configured": false, 00:11:58.466 "data_offset": 0, 00:11:58.466 "data_size": 65536 00:11:58.466 }, 00:11:58.466 { 00:11:58.466 "name": null, 00:11:58.466 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:58.466 "is_configured": false, 00:11:58.466 "data_offset": 0, 00:11:58.466 "data_size": 65536 00:11:58.466 }, 00:11:58.466 { 00:11:58.466 "name": "BaseBdev3", 00:11:58.466 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:58.466 "is_configured": true, 00:11:58.466 "data_offset": 0, 00:11:58.466 "data_size": 65536 00:11:58.466 } 00:11:58.466 ] 00:11:58.466 }' 00:11:58.466 08:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.466 08:43:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.724 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.724 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.724 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.983 [2024-11-27 08:43:55.508826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.983 08:43:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.983 "name": "Existed_Raid", 00:11:58.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.983 "strip_size_kb": 64, 00:11:58.983 "state": "configuring", 00:11:58.983 "raid_level": "concat", 00:11:58.983 "superblock": false, 00:11:58.983 "num_base_bdevs": 3, 00:11:58.983 "num_base_bdevs_discovered": 2, 00:11:58.983 "num_base_bdevs_operational": 3, 00:11:58.983 "base_bdevs_list": [ 00:11:58.983 { 00:11:58.983 "name": null, 00:11:58.983 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:11:58.983 "is_configured": false, 00:11:58.983 "data_offset": 0, 00:11:58.983 "data_size": 65536 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "name": "BaseBdev2", 00:11:58.983 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:58.983 "is_configured": true, 00:11:58.983 "data_offset": 
0, 00:11:58.983 "data_size": 65536 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "name": "BaseBdev3", 00:11:58.983 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:58.983 "is_configured": true, 00:11:58.983 "data_offset": 0, 00:11:58.983 "data_size": 65536 00:11:58.983 } 00:11:58.983 ] 00:11:58.983 }' 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.983 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7bc3c2c7-41d2-44c4-89a6-45f25cc9c229 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.550 [2024-11-27 08:43:56.187840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:59.550 [2024-11-27 08:43:56.188288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:59.550 [2024-11-27 08:43:56.188321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:59.550 [2024-11-27 08:43:56.188707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:59.550 [2024-11-27 08:43:56.188952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:59.550 [2024-11-27 08:43:56.188969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:59.550 [2024-11-27 08:43:56.189334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.550 NewBaseBdev 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:11:59.550 
08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.550 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.550 [ 00:11:59.550 { 00:11:59.550 "name": "NewBaseBdev", 00:11:59.550 "aliases": [ 00:11:59.550 "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229" 00:11:59.550 ], 00:11:59.550 "product_name": "Malloc disk", 00:11:59.550 "block_size": 512, 00:11:59.550 "num_blocks": 65536, 00:11:59.550 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:11:59.550 "assigned_rate_limits": { 00:11:59.551 "rw_ios_per_sec": 0, 00:11:59.551 "rw_mbytes_per_sec": 0, 00:11:59.551 "r_mbytes_per_sec": 0, 00:11:59.551 "w_mbytes_per_sec": 0 00:11:59.551 }, 00:11:59.551 "claimed": true, 00:11:59.551 "claim_type": "exclusive_write", 00:11:59.551 "zoned": false, 00:11:59.551 "supported_io_types": { 00:11:59.551 "read": true, 00:11:59.551 "write": true, 00:11:59.551 "unmap": true, 00:11:59.551 "flush": true, 00:11:59.551 "reset": true, 00:11:59.551 "nvme_admin": false, 00:11:59.551 "nvme_io": false, 00:11:59.551 "nvme_io_md": false, 00:11:59.551 "write_zeroes": true, 00:11:59.551 "zcopy": true, 00:11:59.551 "get_zone_info": false, 00:11:59.551 "zone_management": false, 00:11:59.551 "zone_append": false, 00:11:59.551 "compare": false, 00:11:59.551 "compare_and_write": false, 00:11:59.551 "abort": true, 00:11:59.551 "seek_hole": false, 00:11:59.551 "seek_data": false, 00:11:59.551 "copy": true, 00:11:59.551 "nvme_iov_md": false 00:11:59.551 }, 00:11:59.551 
"memory_domains": [ 00:11:59.551 { 00:11:59.551 "dma_device_id": "system", 00:11:59.551 "dma_device_type": 1 00:11:59.551 }, 00:11:59.551 { 00:11:59.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.551 "dma_device_type": 2 00:11:59.551 } 00:11:59.551 ], 00:11:59.551 "driver_specific": {} 00:11:59.551 } 00:11:59.551 ] 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.551 "name": "Existed_Raid", 00:11:59.551 "uuid": "418d8609-36ba-47cc-8767-36dad806ada6", 00:11:59.551 "strip_size_kb": 64, 00:11:59.551 "state": "online", 00:11:59.551 "raid_level": "concat", 00:11:59.551 "superblock": false, 00:11:59.551 "num_base_bdevs": 3, 00:11:59.551 "num_base_bdevs_discovered": 3, 00:11:59.551 "num_base_bdevs_operational": 3, 00:11:59.551 "base_bdevs_list": [ 00:11:59.551 { 00:11:59.551 "name": "NewBaseBdev", 00:11:59.551 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:11:59.551 "is_configured": true, 00:11:59.551 "data_offset": 0, 00:11:59.551 "data_size": 65536 00:11:59.551 }, 00:11:59.551 { 00:11:59.551 "name": "BaseBdev2", 00:11:59.551 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:11:59.551 "is_configured": true, 00:11:59.551 "data_offset": 0, 00:11:59.551 "data_size": 65536 00:11:59.551 }, 00:11:59.551 { 00:11:59.551 "name": "BaseBdev3", 00:11:59.551 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:11:59.551 "is_configured": true, 00:11:59.551 "data_offset": 0, 00:11:59.551 "data_size": 65536 00:11:59.551 } 00:11:59.551 ] 00:11:59.551 }' 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.551 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 [2024-11-27 08:43:56.756512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.119 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.119 "name": "Existed_Raid", 00:12:00.119 "aliases": [ 00:12:00.119 "418d8609-36ba-47cc-8767-36dad806ada6" 00:12:00.119 ], 00:12:00.119 "product_name": "Raid Volume", 00:12:00.119 "block_size": 512, 00:12:00.119 "num_blocks": 196608, 00:12:00.119 "uuid": "418d8609-36ba-47cc-8767-36dad806ada6", 00:12:00.119 "assigned_rate_limits": { 00:12:00.119 "rw_ios_per_sec": 0, 00:12:00.119 "rw_mbytes_per_sec": 0, 00:12:00.119 "r_mbytes_per_sec": 0, 00:12:00.119 "w_mbytes_per_sec": 0 00:12:00.119 }, 00:12:00.119 "claimed": false, 00:12:00.119 "zoned": false, 00:12:00.119 "supported_io_types": { 00:12:00.119 "read": true, 00:12:00.119 "write": true, 00:12:00.119 "unmap": true, 00:12:00.119 "flush": true, 00:12:00.119 "reset": true, 00:12:00.119 "nvme_admin": false, 00:12:00.119 "nvme_io": false, 00:12:00.119 "nvme_io_md": false, 00:12:00.119 "write_zeroes": true, 
00:12:00.119 "zcopy": false, 00:12:00.119 "get_zone_info": false, 00:12:00.119 "zone_management": false, 00:12:00.119 "zone_append": false, 00:12:00.119 "compare": false, 00:12:00.119 "compare_and_write": false, 00:12:00.119 "abort": false, 00:12:00.119 "seek_hole": false, 00:12:00.119 "seek_data": false, 00:12:00.119 "copy": false, 00:12:00.119 "nvme_iov_md": false 00:12:00.119 }, 00:12:00.119 "memory_domains": [ 00:12:00.119 { 00:12:00.119 "dma_device_id": "system", 00:12:00.119 "dma_device_type": 1 00:12:00.119 }, 00:12:00.119 { 00:12:00.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.119 "dma_device_type": 2 00:12:00.119 }, 00:12:00.119 { 00:12:00.119 "dma_device_id": "system", 00:12:00.119 "dma_device_type": 1 00:12:00.119 }, 00:12:00.119 { 00:12:00.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.119 "dma_device_type": 2 00:12:00.119 }, 00:12:00.119 { 00:12:00.119 "dma_device_id": "system", 00:12:00.119 "dma_device_type": 1 00:12:00.119 }, 00:12:00.119 { 00:12:00.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.119 "dma_device_type": 2 00:12:00.119 } 00:12:00.119 ], 00:12:00.119 "driver_specific": { 00:12:00.119 "raid": { 00:12:00.119 "uuid": "418d8609-36ba-47cc-8767-36dad806ada6", 00:12:00.119 "strip_size_kb": 64, 00:12:00.119 "state": "online", 00:12:00.119 "raid_level": "concat", 00:12:00.119 "superblock": false, 00:12:00.119 "num_base_bdevs": 3, 00:12:00.119 "num_base_bdevs_discovered": 3, 00:12:00.119 "num_base_bdevs_operational": 3, 00:12:00.119 "base_bdevs_list": [ 00:12:00.120 { 00:12:00.120 "name": "NewBaseBdev", 00:12:00.120 "uuid": "7bc3c2c7-41d2-44c4-89a6-45f25cc9c229", 00:12:00.120 "is_configured": true, 00:12:00.120 "data_offset": 0, 00:12:00.120 "data_size": 65536 00:12:00.120 }, 00:12:00.120 { 00:12:00.120 "name": "BaseBdev2", 00:12:00.120 "uuid": "f51be02a-0a6d-4254-ab97-3bb2f1f36fe2", 00:12:00.120 "is_configured": true, 00:12:00.120 "data_offset": 0, 00:12:00.120 "data_size": 65536 00:12:00.120 }, 00:12:00.120 { 
00:12:00.120 "name": "BaseBdev3", 00:12:00.120 "uuid": "b9bb2d21-2ee5-4ee4-9447-2fbd271ed7ec", 00:12:00.120 "is_configured": true, 00:12:00.120 "data_offset": 0, 00:12:00.120 "data_size": 65536 00:12:00.120 } 00:12:00.120 ] 00:12:00.120 } 00:12:00.120 } 00:12:00.120 }' 00:12:00.120 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.120 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:00.120 BaseBdev2 00:12:00.120 BaseBdev3' 00:12:00.120 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.379 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:00.379 [2024-11-27 08:43:57.104252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.379 [2024-11-27 08:43:57.104297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.379 [2024-11-27 08:43:57.104477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.379 [2024-11-27 08:43:57.104576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.379 [2024-11-27 08:43:57.104599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65747 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 65747 ']' 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # kill -0 65747 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:00.379 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 65747 00:12:00.637 killing process with pid 65747 00:12:00.637 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:00.637 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:00.637 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 65747' 00:12:00.637 08:43:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@970 -- # kill 65747 00:12:00.637 [2024-11-27 08:43:57.145032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.637 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 65747 00:12:00.895 [2024-11-27 08:43:57.444505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.874 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:01.874 00:12:01.874 real 0m11.925s 00:12:01.874 user 0m19.462s 00:12:01.874 sys 0m1.763s 00:12:01.874 ************************************ 00:12:01.874 END TEST raid_state_function_test 00:12:01.874 ************************************ 00:12:01.874 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:01.874 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.874 08:43:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:01.874 08:43:58 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:12:01.874 08:43:58 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:01.874 08:43:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.133 ************************************ 00:12:02.133 START TEST raid_state_function_test_sb 00:12:02.133 ************************************ 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test concat 3 true 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:02.133 Process raid pid: 66385 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66385 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66385' 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66385 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 66385 ']' 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:02.133 08:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.133 [2024-11-27 08:43:58.764822] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:12:02.133 [2024-11-27 08:43:58.765038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.391 [2024-11-27 08:43:58.956183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.391 [2024-11-27 08:43:59.109052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.648 [2024-11-27 08:43:59.340731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.648 [2024-11-27 08:43:59.340809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.214 08:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:03.214 08:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:12:03.214 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.215 [2024-11-27 08:43:59.768531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.215 [2024-11-27 08:43:59.768609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.215 [2024-11-27 
08:43:59.768629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.215 [2024-11-27 08:43:59.768647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.215 [2024-11-27 08:43:59.768658] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.215 [2024-11-27 08:43:59.768673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.215 "name": "Existed_Raid", 00:12:03.215 "uuid": "10272c96-5e30-43a4-852f-abec5a892123", 00:12:03.215 "strip_size_kb": 64, 00:12:03.215 "state": "configuring", 00:12:03.215 "raid_level": "concat", 00:12:03.215 "superblock": true, 00:12:03.215 "num_base_bdevs": 3, 00:12:03.215 "num_base_bdevs_discovered": 0, 00:12:03.215 "num_base_bdevs_operational": 3, 00:12:03.215 "base_bdevs_list": [ 00:12:03.215 { 00:12:03.215 "name": "BaseBdev1", 00:12:03.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.215 "is_configured": false, 00:12:03.215 "data_offset": 0, 00:12:03.215 "data_size": 0 00:12:03.215 }, 00:12:03.215 { 00:12:03.215 "name": "BaseBdev2", 00:12:03.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.215 "is_configured": false, 00:12:03.215 "data_offset": 0, 00:12:03.215 "data_size": 0 00:12:03.215 }, 00:12:03.215 { 00:12:03.215 "name": "BaseBdev3", 00:12:03.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.215 "is_configured": false, 00:12:03.215 "data_offset": 0, 00:12:03.215 "data_size": 0 00:12:03.215 } 00:12:03.215 ] 00:12:03.215 }' 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.215 08:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.472 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.472 08:44:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.472 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.472 [2024-11-27 08:44:00.228643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.472 [2024-11-27 08:44:00.229291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.730 [2024-11-27 08:44:00.240715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.730 [2024-11-27 08:44:00.240832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.730 [2024-11-27 08:44:00.240850] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.730 [2024-11-27 08:44:00.240868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.730 [2024-11-27 08:44:00.240878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.730 [2024-11-27 08:44:00.240893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:03.730 
08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.730 [2024-11-27 08:44:00.296213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.730 BaseBdev1 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.730 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.730 [ 00:12:03.730 { 
00:12:03.730 "name": "BaseBdev1", 00:12:03.730 "aliases": [ 00:12:03.730 "72a6944f-54f1-44ec-96b0-498da0ea8bc8" 00:12:03.730 ], 00:12:03.730 "product_name": "Malloc disk", 00:12:03.730 "block_size": 512, 00:12:03.730 "num_blocks": 65536, 00:12:03.730 "uuid": "72a6944f-54f1-44ec-96b0-498da0ea8bc8", 00:12:03.730 "assigned_rate_limits": { 00:12:03.730 "rw_ios_per_sec": 0, 00:12:03.730 "rw_mbytes_per_sec": 0, 00:12:03.730 "r_mbytes_per_sec": 0, 00:12:03.730 "w_mbytes_per_sec": 0 00:12:03.730 }, 00:12:03.730 "claimed": true, 00:12:03.730 "claim_type": "exclusive_write", 00:12:03.730 "zoned": false, 00:12:03.730 "supported_io_types": { 00:12:03.730 "read": true, 00:12:03.730 "write": true, 00:12:03.730 "unmap": true, 00:12:03.730 "flush": true, 00:12:03.730 "reset": true, 00:12:03.730 "nvme_admin": false, 00:12:03.730 "nvme_io": false, 00:12:03.730 "nvme_io_md": false, 00:12:03.730 "write_zeroes": true, 00:12:03.730 "zcopy": true, 00:12:03.730 "get_zone_info": false, 00:12:03.730 "zone_management": false, 00:12:03.730 "zone_append": false, 00:12:03.730 "compare": false, 00:12:03.730 "compare_and_write": false, 00:12:03.730 "abort": true, 00:12:03.730 "seek_hole": false, 00:12:03.730 "seek_data": false, 00:12:03.730 "copy": true, 00:12:03.730 "nvme_iov_md": false 00:12:03.730 }, 00:12:03.730 "memory_domains": [ 00:12:03.730 { 00:12:03.730 "dma_device_id": "system", 00:12:03.730 "dma_device_type": 1 00:12:03.730 }, 00:12:03.730 { 00:12:03.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.731 "dma_device_type": 2 00:12:03.731 } 00:12:03.731 ], 00:12:03.731 "driver_specific": {} 00:12:03.731 } 00:12:03.731 ] 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.731 "name": "Existed_Raid", 00:12:03.731 "uuid": "cf46ed85-1de6-482e-a502-64d986399a0d", 00:12:03.731 "strip_size_kb": 64, 00:12:03.731 "state": "configuring", 00:12:03.731 "raid_level": "concat", 00:12:03.731 "superblock": true, 00:12:03.731 
"num_base_bdevs": 3, 00:12:03.731 "num_base_bdevs_discovered": 1, 00:12:03.731 "num_base_bdevs_operational": 3, 00:12:03.731 "base_bdevs_list": [ 00:12:03.731 { 00:12:03.731 "name": "BaseBdev1", 00:12:03.731 "uuid": "72a6944f-54f1-44ec-96b0-498da0ea8bc8", 00:12:03.731 "is_configured": true, 00:12:03.731 "data_offset": 2048, 00:12:03.731 "data_size": 63488 00:12:03.731 }, 00:12:03.731 { 00:12:03.731 "name": "BaseBdev2", 00:12:03.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.731 "is_configured": false, 00:12:03.731 "data_offset": 0, 00:12:03.731 "data_size": 0 00:12:03.731 }, 00:12:03.731 { 00:12:03.731 "name": "BaseBdev3", 00:12:03.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.731 "is_configured": false, 00:12:03.731 "data_offset": 0, 00:12:03.731 "data_size": 0 00:12:03.731 } 00:12:03.731 ] 00:12:03.731 }' 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.731 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.297 [2024-11-27 08:44:00.848470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:04.297 [2024-11-27 08:44:00.848804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:04.297 
08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.297 [2024-11-27 08:44:00.860566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.297 [2024-11-27 08:44:00.863594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.297 [2024-11-27 08:44:00.863802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.297 [2024-11-27 08:44:00.863983] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:04.297 [2024-11-27 08:44:00.864161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.297 "name": "Existed_Raid", 00:12:04.297 "uuid": "c597edda-423b-4708-96ff-153a635c1ed2", 00:12:04.297 "strip_size_kb": 64, 00:12:04.297 "state": "configuring", 00:12:04.297 "raid_level": "concat", 00:12:04.297 "superblock": true, 00:12:04.297 "num_base_bdevs": 3, 00:12:04.297 "num_base_bdevs_discovered": 1, 00:12:04.297 "num_base_bdevs_operational": 3, 00:12:04.297 "base_bdevs_list": [ 00:12:04.297 { 00:12:04.297 "name": "BaseBdev1", 00:12:04.297 "uuid": "72a6944f-54f1-44ec-96b0-498da0ea8bc8", 00:12:04.297 "is_configured": true, 00:12:04.297 "data_offset": 2048, 00:12:04.297 "data_size": 63488 00:12:04.297 }, 00:12:04.297 { 00:12:04.297 "name": "BaseBdev2", 00:12:04.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.297 "is_configured": false, 00:12:04.297 "data_offset": 0, 00:12:04.297 "data_size": 0 00:12:04.297 }, 00:12:04.297 { 00:12:04.297 "name": "BaseBdev3", 00:12:04.297 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:04.297 "is_configured": false, 00:12:04.297 "data_offset": 0, 00:12:04.297 "data_size": 0 00:12:04.297 } 00:12:04.297 ] 00:12:04.297 }' 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.297 08:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.861 [2024-11-27 08:44:01.439751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.861 BaseBdev2 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.861 [ 00:12:04.861 { 00:12:04.861 "name": "BaseBdev2", 00:12:04.861 "aliases": [ 00:12:04.861 "b2c8b3b5-4bd6-481e-a62a-d10795ce5b5a" 00:12:04.861 ], 00:12:04.861 "product_name": "Malloc disk", 00:12:04.861 "block_size": 512, 00:12:04.861 "num_blocks": 65536, 00:12:04.861 "uuid": "b2c8b3b5-4bd6-481e-a62a-d10795ce5b5a", 00:12:04.861 "assigned_rate_limits": { 00:12:04.861 "rw_ios_per_sec": 0, 00:12:04.861 "rw_mbytes_per_sec": 0, 00:12:04.861 "r_mbytes_per_sec": 0, 00:12:04.861 "w_mbytes_per_sec": 0 00:12:04.861 }, 00:12:04.861 "claimed": true, 00:12:04.861 "claim_type": "exclusive_write", 00:12:04.861 "zoned": false, 00:12:04.861 "supported_io_types": { 00:12:04.861 "read": true, 00:12:04.861 "write": true, 00:12:04.861 "unmap": true, 00:12:04.861 "flush": true, 00:12:04.861 "reset": true, 00:12:04.861 "nvme_admin": false, 00:12:04.861 "nvme_io": false, 00:12:04.861 "nvme_io_md": false, 00:12:04.861 "write_zeroes": true, 00:12:04.861 "zcopy": true, 00:12:04.861 "get_zone_info": false, 00:12:04.861 "zone_management": false, 00:12:04.861 "zone_append": false, 00:12:04.861 "compare": false, 00:12:04.861 "compare_and_write": false, 00:12:04.861 "abort": true, 00:12:04.861 "seek_hole": false, 00:12:04.861 "seek_data": false, 00:12:04.861 "copy": true, 00:12:04.861 "nvme_iov_md": false 00:12:04.861 }, 00:12:04.861 "memory_domains": [ 00:12:04.861 { 00:12:04.861 "dma_device_id": "system", 00:12:04.861 "dma_device_type": 1 00:12:04.861 }, 00:12:04.861 { 00:12:04.861 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.861 "dma_device_type": 2 00:12:04.861 } 00:12:04.861 ], 00:12:04.861 "driver_specific": {} 00:12:04.861 } 00:12:04.861 ] 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.861 "name": "Existed_Raid", 00:12:04.861 "uuid": "c597edda-423b-4708-96ff-153a635c1ed2", 00:12:04.861 "strip_size_kb": 64, 00:12:04.861 "state": "configuring", 00:12:04.861 "raid_level": "concat", 00:12:04.861 "superblock": true, 00:12:04.861 "num_base_bdevs": 3, 00:12:04.861 "num_base_bdevs_discovered": 2, 00:12:04.861 "num_base_bdevs_operational": 3, 00:12:04.861 "base_bdevs_list": [ 00:12:04.861 { 00:12:04.861 "name": "BaseBdev1", 00:12:04.861 "uuid": "72a6944f-54f1-44ec-96b0-498da0ea8bc8", 00:12:04.861 "is_configured": true, 00:12:04.861 "data_offset": 2048, 00:12:04.861 "data_size": 63488 00:12:04.861 }, 00:12:04.861 { 00:12:04.861 "name": "BaseBdev2", 00:12:04.861 "uuid": "b2c8b3b5-4bd6-481e-a62a-d10795ce5b5a", 00:12:04.861 "is_configured": true, 00:12:04.861 "data_offset": 2048, 00:12:04.861 "data_size": 63488 00:12:04.861 }, 00:12:04.861 { 00:12:04.861 "name": "BaseBdev3", 00:12:04.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.861 "is_configured": false, 00:12:04.861 "data_offset": 0, 00:12:04.861 "data_size": 0 00:12:04.861 } 00:12:04.861 ] 00:12:04.861 }' 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.861 08:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.426 08:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.426 08:44:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.426 [2024-11-27 08:44:02.053806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.426 [2024-11-27 08:44:02.054217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.426 [2024-11-27 08:44:02.054253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:05.426 BaseBdev3 00:12:05.426 [2024-11-27 08:44:02.054686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:05.426 [2024-11-27 08:44:02.054946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.426 [2024-11-27 08:44:02.054965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:05.426 [2024-11-27 08:44:02.055160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.426 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.426 [ 00:12:05.426 { 00:12:05.426 "name": "BaseBdev3", 00:12:05.426 "aliases": [ 00:12:05.426 "77e3e422-78bb-4aac-a06b-b361781938eb" 00:12:05.426 ], 00:12:05.426 "product_name": "Malloc disk", 00:12:05.426 "block_size": 512, 00:12:05.426 "num_blocks": 65536, 00:12:05.426 "uuid": "77e3e422-78bb-4aac-a06b-b361781938eb", 00:12:05.426 "assigned_rate_limits": { 00:12:05.426 "rw_ios_per_sec": 0, 00:12:05.426 "rw_mbytes_per_sec": 0, 00:12:05.426 "r_mbytes_per_sec": 0, 00:12:05.426 "w_mbytes_per_sec": 0 00:12:05.426 }, 00:12:05.426 "claimed": true, 00:12:05.426 "claim_type": "exclusive_write", 00:12:05.426 "zoned": false, 00:12:05.426 "supported_io_types": { 00:12:05.426 "read": true, 00:12:05.426 "write": true, 00:12:05.426 "unmap": true, 00:12:05.426 "flush": true, 00:12:05.426 "reset": true, 00:12:05.426 "nvme_admin": false, 00:12:05.426 "nvme_io": false, 00:12:05.426 "nvme_io_md": false, 00:12:05.426 "write_zeroes": true, 00:12:05.426 "zcopy": true, 00:12:05.427 "get_zone_info": false, 00:12:05.427 "zone_management": false, 00:12:05.427 "zone_append": false, 00:12:05.427 "compare": false, 00:12:05.427 "compare_and_write": false, 00:12:05.427 "abort": true, 00:12:05.427 "seek_hole": false, 00:12:05.427 "seek_data": false, 
00:12:05.427 "copy": true, 00:12:05.427 "nvme_iov_md": false 00:12:05.427 }, 00:12:05.427 "memory_domains": [ 00:12:05.427 { 00:12:05.427 "dma_device_id": "system", 00:12:05.427 "dma_device_type": 1 00:12:05.427 }, 00:12:05.427 { 00:12:05.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.427 "dma_device_type": 2 00:12:05.427 } 00:12:05.427 ], 00:12:05.427 "driver_specific": {} 00:12:05.427 } 00:12:05.427 ] 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.427 "name": "Existed_Raid", 00:12:05.427 "uuid": "c597edda-423b-4708-96ff-153a635c1ed2", 00:12:05.427 "strip_size_kb": 64, 00:12:05.427 "state": "online", 00:12:05.427 "raid_level": "concat", 00:12:05.427 "superblock": true, 00:12:05.427 "num_base_bdevs": 3, 00:12:05.427 "num_base_bdevs_discovered": 3, 00:12:05.427 "num_base_bdevs_operational": 3, 00:12:05.427 "base_bdevs_list": [ 00:12:05.427 { 00:12:05.427 "name": "BaseBdev1", 00:12:05.427 "uuid": "72a6944f-54f1-44ec-96b0-498da0ea8bc8", 00:12:05.427 "is_configured": true, 00:12:05.427 "data_offset": 2048, 00:12:05.427 "data_size": 63488 00:12:05.427 }, 00:12:05.427 { 00:12:05.427 "name": "BaseBdev2", 00:12:05.427 "uuid": "b2c8b3b5-4bd6-481e-a62a-d10795ce5b5a", 00:12:05.427 "is_configured": true, 00:12:05.427 "data_offset": 2048, 00:12:05.427 "data_size": 63488 00:12:05.427 }, 00:12:05.427 { 00:12:05.427 "name": "BaseBdev3", 00:12:05.427 "uuid": "77e3e422-78bb-4aac-a06b-b361781938eb", 00:12:05.427 "is_configured": true, 00:12:05.427 "data_offset": 2048, 00:12:05.427 "data_size": 63488 00:12:05.427 } 00:12:05.427 ] 00:12:05.427 }' 00:12:05.427 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.427 08:44:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.993 [2024-11-27 08:44:02.638497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.993 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.993 "name": "Existed_Raid", 00:12:05.993 "aliases": [ 00:12:05.993 "c597edda-423b-4708-96ff-153a635c1ed2" 00:12:05.993 ], 00:12:05.993 "product_name": "Raid Volume", 00:12:05.993 "block_size": 512, 00:12:05.993 "num_blocks": 190464, 00:12:05.993 "uuid": "c597edda-423b-4708-96ff-153a635c1ed2", 00:12:05.993 "assigned_rate_limits": { 00:12:05.993 "rw_ios_per_sec": 0, 00:12:05.993 "rw_mbytes_per_sec": 0, 00:12:05.993 
"r_mbytes_per_sec": 0, 00:12:05.993 "w_mbytes_per_sec": 0 00:12:05.993 }, 00:12:05.993 "claimed": false, 00:12:05.993 "zoned": false, 00:12:05.993 "supported_io_types": { 00:12:05.993 "read": true, 00:12:05.993 "write": true, 00:12:05.993 "unmap": true, 00:12:05.993 "flush": true, 00:12:05.993 "reset": true, 00:12:05.993 "nvme_admin": false, 00:12:05.993 "nvme_io": false, 00:12:05.993 "nvme_io_md": false, 00:12:05.993 "write_zeroes": true, 00:12:05.993 "zcopy": false, 00:12:05.993 "get_zone_info": false, 00:12:05.993 "zone_management": false, 00:12:05.993 "zone_append": false, 00:12:05.993 "compare": false, 00:12:05.993 "compare_and_write": false, 00:12:05.993 "abort": false, 00:12:05.993 "seek_hole": false, 00:12:05.993 "seek_data": false, 00:12:05.993 "copy": false, 00:12:05.993 "nvme_iov_md": false 00:12:05.993 }, 00:12:05.993 "memory_domains": [ 00:12:05.993 { 00:12:05.993 "dma_device_id": "system", 00:12:05.993 "dma_device_type": 1 00:12:05.993 }, 00:12:05.993 { 00:12:05.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.993 "dma_device_type": 2 00:12:05.993 }, 00:12:05.993 { 00:12:05.993 "dma_device_id": "system", 00:12:05.993 "dma_device_type": 1 00:12:05.993 }, 00:12:05.993 { 00:12:05.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.993 "dma_device_type": 2 00:12:05.993 }, 00:12:05.993 { 00:12:05.993 "dma_device_id": "system", 00:12:05.993 "dma_device_type": 1 00:12:05.993 }, 00:12:05.993 { 00:12:05.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.993 "dma_device_type": 2 00:12:05.993 } 00:12:05.993 ], 00:12:05.993 "driver_specific": { 00:12:05.993 "raid": { 00:12:05.993 "uuid": "c597edda-423b-4708-96ff-153a635c1ed2", 00:12:05.993 "strip_size_kb": 64, 00:12:05.993 "state": "online", 00:12:05.993 "raid_level": "concat", 00:12:05.993 "superblock": true, 00:12:05.993 "num_base_bdevs": 3, 00:12:05.993 "num_base_bdevs_discovered": 3, 00:12:05.993 "num_base_bdevs_operational": 3, 00:12:05.993 "base_bdevs_list": [ 00:12:05.993 { 00:12:05.993 
"name": "BaseBdev1", 00:12:05.994 "uuid": "72a6944f-54f1-44ec-96b0-498da0ea8bc8", 00:12:05.994 "is_configured": true, 00:12:05.994 "data_offset": 2048, 00:12:05.994 "data_size": 63488 00:12:05.994 }, 00:12:05.994 { 00:12:05.994 "name": "BaseBdev2", 00:12:05.994 "uuid": "b2c8b3b5-4bd6-481e-a62a-d10795ce5b5a", 00:12:05.994 "is_configured": true, 00:12:05.994 "data_offset": 2048, 00:12:05.994 "data_size": 63488 00:12:05.994 }, 00:12:05.994 { 00:12:05.994 "name": "BaseBdev3", 00:12:05.994 "uuid": "77e3e422-78bb-4aac-a06b-b361781938eb", 00:12:05.994 "is_configured": true, 00:12:05.994 "data_offset": 2048, 00:12:05.994 "data_size": 63488 00:12:05.994 } 00:12:05.994 ] 00:12:05.994 } 00:12:05.994 } 00:12:05.994 }' 00:12:05.994 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.994 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.994 BaseBdev2 00:12:05.994 BaseBdev3' 00:12:05.994 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 08:44:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.253 08:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.253 [2024-11-27 08:44:02.954387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.253 [2024-11-27 08:44:02.954841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.253 [2024-11-27 08:44:02.955001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.512 "name": "Existed_Raid", 00:12:06.512 "uuid": "c597edda-423b-4708-96ff-153a635c1ed2", 00:12:06.512 "strip_size_kb": 64, 00:12:06.512 "state": "offline", 00:12:06.512 "raid_level": "concat", 00:12:06.512 "superblock": true, 00:12:06.512 "num_base_bdevs": 3, 00:12:06.512 "num_base_bdevs_discovered": 2, 00:12:06.512 "num_base_bdevs_operational": 2, 00:12:06.512 "base_bdevs_list": [ 00:12:06.512 { 00:12:06.512 "name": null, 00:12:06.512 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.512 "is_configured": false, 00:12:06.512 "data_offset": 0, 00:12:06.512 "data_size": 63488 00:12:06.512 }, 00:12:06.512 { 00:12:06.512 "name": "BaseBdev2", 00:12:06.512 "uuid": "b2c8b3b5-4bd6-481e-a62a-d10795ce5b5a", 00:12:06.512 "is_configured": true, 00:12:06.512 "data_offset": 2048, 00:12:06.512 "data_size": 63488 00:12:06.512 }, 00:12:06.512 { 00:12:06.512 "name": "BaseBdev3", 00:12:06.512 "uuid": "77e3e422-78bb-4aac-a06b-b361781938eb", 00:12:06.512 "is_configured": true, 00:12:06.512 "data_offset": 2048, 00:12:06.512 "data_size": 63488 00:12:06.512 } 00:12:06.512 ] 00:12:06.512 }' 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.512 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.079 [2024-11-27 08:44:03.638061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.079 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.079 [2024-11-27 08:44:03.798552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.079 [2024-11-27 08:44:03.798645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.337 BaseBdev2 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.337 
08:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.337 08:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.337 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.337 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:07.337 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.337 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.337 [ 00:12:07.337 { 00:12:07.337 "name": "BaseBdev2", 00:12:07.337 "aliases": [ 00:12:07.337 "9afc2a14-af49-4529-82a4-798617942ab9" 00:12:07.337 ], 00:12:07.337 "product_name": "Malloc disk", 00:12:07.337 "block_size": 512, 00:12:07.337 "num_blocks": 65536, 00:12:07.337 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:07.337 "assigned_rate_limits": { 00:12:07.337 "rw_ios_per_sec": 0, 00:12:07.337 "rw_mbytes_per_sec": 0, 00:12:07.337 "r_mbytes_per_sec": 0, 00:12:07.337 "w_mbytes_per_sec": 0 
00:12:07.337 }, 00:12:07.337 "claimed": false, 00:12:07.337 "zoned": false, 00:12:07.337 "supported_io_types": { 00:12:07.337 "read": true, 00:12:07.337 "write": true, 00:12:07.337 "unmap": true, 00:12:07.337 "flush": true, 00:12:07.337 "reset": true, 00:12:07.337 "nvme_admin": false, 00:12:07.338 "nvme_io": false, 00:12:07.338 "nvme_io_md": false, 00:12:07.338 "write_zeroes": true, 00:12:07.338 "zcopy": true, 00:12:07.338 "get_zone_info": false, 00:12:07.338 "zone_management": false, 00:12:07.338 "zone_append": false, 00:12:07.338 "compare": false, 00:12:07.338 "compare_and_write": false, 00:12:07.338 "abort": true, 00:12:07.338 "seek_hole": false, 00:12:07.338 "seek_data": false, 00:12:07.338 "copy": true, 00:12:07.338 "nvme_iov_md": false 00:12:07.338 }, 00:12:07.338 "memory_domains": [ 00:12:07.338 { 00:12:07.338 "dma_device_id": "system", 00:12:07.338 "dma_device_type": 1 00:12:07.338 }, 00:12:07.338 { 00:12:07.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.338 "dma_device_type": 2 00:12:07.338 } 00:12:07.338 ], 00:12:07.338 "driver_specific": {} 00:12:07.338 } 00:12:07.338 ] 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.338 BaseBdev3 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.338 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.597 [ 00:12:07.597 { 00:12:07.597 "name": "BaseBdev3", 00:12:07.597 "aliases": [ 00:12:07.597 "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16" 00:12:07.597 ], 00:12:07.597 "product_name": "Malloc disk", 00:12:07.597 "block_size": 512, 00:12:07.597 "num_blocks": 65536, 00:12:07.597 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:07.597 "assigned_rate_limits": { 00:12:07.597 "rw_ios_per_sec": 0, 00:12:07.597 "rw_mbytes_per_sec": 0, 
00:12:07.597 "r_mbytes_per_sec": 0, 00:12:07.597 "w_mbytes_per_sec": 0 00:12:07.597 }, 00:12:07.597 "claimed": false, 00:12:07.597 "zoned": false, 00:12:07.597 "supported_io_types": { 00:12:07.597 "read": true, 00:12:07.597 "write": true, 00:12:07.597 "unmap": true, 00:12:07.597 "flush": true, 00:12:07.597 "reset": true, 00:12:07.597 "nvme_admin": false, 00:12:07.597 "nvme_io": false, 00:12:07.597 "nvme_io_md": false, 00:12:07.597 "write_zeroes": true, 00:12:07.597 "zcopy": true, 00:12:07.597 "get_zone_info": false, 00:12:07.597 "zone_management": false, 00:12:07.597 "zone_append": false, 00:12:07.597 "compare": false, 00:12:07.597 "compare_and_write": false, 00:12:07.597 "abort": true, 00:12:07.597 "seek_hole": false, 00:12:07.597 "seek_data": false, 00:12:07.597 "copy": true, 00:12:07.597 "nvme_iov_md": false 00:12:07.597 }, 00:12:07.597 "memory_domains": [ 00:12:07.597 { 00:12:07.597 "dma_device_id": "system", 00:12:07.597 "dma_device_type": 1 00:12:07.597 }, 00:12:07.597 { 00:12:07.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.597 "dma_device_type": 2 00:12:07.597 } 00:12:07.597 ], 00:12:07.597 "driver_specific": {} 00:12:07.597 } 00:12:07.597 ] 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.597 [2024-11-27 08:44:04.114137] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.597 [2024-11-27 08:44:04.114484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.597 [2024-11-27 08:44:04.114694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.597 [2024-11-27 08:44:04.117539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.597 08:44:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.597 "name": "Existed_Raid", 00:12:07.597 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:07.597 "strip_size_kb": 64, 00:12:07.597 "state": "configuring", 00:12:07.597 "raid_level": "concat", 00:12:07.597 "superblock": true, 00:12:07.597 "num_base_bdevs": 3, 00:12:07.597 "num_base_bdevs_discovered": 2, 00:12:07.597 "num_base_bdevs_operational": 3, 00:12:07.597 "base_bdevs_list": [ 00:12:07.597 { 00:12:07.597 "name": "BaseBdev1", 00:12:07.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.597 "is_configured": false, 00:12:07.597 "data_offset": 0, 00:12:07.597 "data_size": 0 00:12:07.597 }, 00:12:07.597 { 00:12:07.597 "name": "BaseBdev2", 00:12:07.597 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:07.597 "is_configured": true, 00:12:07.597 "data_offset": 2048, 00:12:07.597 "data_size": 63488 00:12:07.597 }, 00:12:07.597 { 00:12:07.597 "name": "BaseBdev3", 00:12:07.597 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:07.597 "is_configured": true, 00:12:07.597 "data_offset": 2048, 00:12:07.597 "data_size": 63488 00:12:07.597 } 00:12:07.597 ] 00:12:07.597 }' 00:12:07.597 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.598 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
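The repeated `verify_raid_bdev_state Existed_Raid configuring concat 64 3` calls in this trace reduce to: fetch the raid bdev's JSON from `bdev_raid_get_bdevs`, extract fields such as `state` and `num_base_bdevs_discovered`, and compare them to the expected values. A minimal sketch of that field-extraction step follows; the sample JSON is invented for illustration, and a small `sed`-based helper stands in for the `jq` filters the real test uses.

```shell
#!/usr/bin/env bash
# Sample output shaped like one element of `bdev_raid_get_bdevs all`
# (values invented for illustration; the real test selects the object
# with jq: '.[] | select(.name == "Existed_Raid")').
raid_bdev_info='{"name": "Existed_Raid", "state": "configuring", "num_base_bdevs_discovered": 1}'

# Minimal scalar-field extraction without a jq dependency, assuming the
# simple flat JSON layout above (quoted strings or bare numbers).
get_field() { sed -n "s/.*\"$1\": \"\{0,1\}\([^\",}]*\)\"\{0,1\}.*/\1/p" <<< "$2"; }

state=$(get_field state "$raid_bdev_info")
discovered=$(get_field num_base_bdevs_discovered "$raid_bdev_info")

# verify_raid_bdev_state-style comparison against the expected values.
[ "$state" = "configuring" ] && echo "state ok: $state"
echo "discovered: $discovered"
```

The production helper compares several more fields (`raid_level`, `strip_size_kb`, `num_base_bdevs_operational`) the same way, which is why each removal or re-add in the trace is immediately followed by another `bdev_raid_get_bdevs` + `jq` round trip.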
00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.164 [2024-11-27 08:44:04.642224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.164 "name": "Existed_Raid", 00:12:08.164 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:08.164 "strip_size_kb": 64, 00:12:08.164 "state": "configuring", 00:12:08.164 "raid_level": "concat", 00:12:08.164 "superblock": true, 00:12:08.164 "num_base_bdevs": 3, 00:12:08.164 "num_base_bdevs_discovered": 1, 00:12:08.164 "num_base_bdevs_operational": 3, 00:12:08.164 "base_bdevs_list": [ 00:12:08.164 { 00:12:08.164 "name": "BaseBdev1", 00:12:08.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.164 "is_configured": false, 00:12:08.164 "data_offset": 0, 00:12:08.164 "data_size": 0 00:12:08.164 }, 00:12:08.164 { 00:12:08.164 "name": null, 00:12:08.164 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:08.164 "is_configured": false, 00:12:08.164 "data_offset": 0, 00:12:08.164 "data_size": 63488 00:12:08.164 }, 00:12:08.164 { 00:12:08.164 "name": "BaseBdev3", 00:12:08.164 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:08.164 "is_configured": true, 00:12:08.164 "data_offset": 2048, 00:12:08.164 "data_size": 63488 00:12:08.164 } 00:12:08.164 ] 00:12:08.164 }' 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.164 08:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.732 08:44:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.732 [2024-11-27 08:44:05.297133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.732 BaseBdev1 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.732 
08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.732 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.733 [ 00:12:08.733 { 00:12:08.733 "name": "BaseBdev1", 00:12:08.733 "aliases": [ 00:12:08.733 "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a" 00:12:08.733 ], 00:12:08.733 "product_name": "Malloc disk", 00:12:08.733 "block_size": 512, 00:12:08.733 "num_blocks": 65536, 00:12:08.733 "uuid": "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:08.733 "assigned_rate_limits": { 00:12:08.733 "rw_ios_per_sec": 0, 00:12:08.733 "rw_mbytes_per_sec": 0, 00:12:08.733 "r_mbytes_per_sec": 0, 00:12:08.733 "w_mbytes_per_sec": 0 00:12:08.733 }, 00:12:08.733 "claimed": true, 00:12:08.733 "claim_type": "exclusive_write", 00:12:08.733 "zoned": false, 00:12:08.733 "supported_io_types": { 00:12:08.733 "read": true, 00:12:08.733 "write": true, 00:12:08.733 "unmap": true, 00:12:08.733 "flush": true, 00:12:08.733 "reset": true, 00:12:08.733 "nvme_admin": false, 00:12:08.733 "nvme_io": false, 00:12:08.733 "nvme_io_md": false, 00:12:08.733 "write_zeroes": true, 00:12:08.733 "zcopy": true, 00:12:08.733 "get_zone_info": false, 00:12:08.733 "zone_management": false, 00:12:08.733 "zone_append": false, 00:12:08.733 "compare": false, 00:12:08.733 "compare_and_write": false, 00:12:08.733 "abort": true, 00:12:08.733 "seek_hole": false, 00:12:08.733 "seek_data": false, 00:12:08.733 "copy": true, 00:12:08.733 "nvme_iov_md": false 00:12:08.733 }, 00:12:08.733 "memory_domains": [ 00:12:08.733 { 00:12:08.733 "dma_device_id": "system", 00:12:08.733 "dma_device_type": 1 00:12:08.733 }, 00:12:08.733 { 00:12:08.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:08.733 "dma_device_type": 2 00:12:08.733 } 00:12:08.733 ], 00:12:08.733 "driver_specific": {} 00:12:08.733 } 00:12:08.733 ] 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.733 "name": "Existed_Raid", 00:12:08.733 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:08.733 "strip_size_kb": 64, 00:12:08.733 "state": "configuring", 00:12:08.733 "raid_level": "concat", 00:12:08.733 "superblock": true, 00:12:08.733 "num_base_bdevs": 3, 00:12:08.733 "num_base_bdevs_discovered": 2, 00:12:08.733 "num_base_bdevs_operational": 3, 00:12:08.733 "base_bdevs_list": [ 00:12:08.733 { 00:12:08.733 "name": "BaseBdev1", 00:12:08.733 "uuid": "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:08.733 "is_configured": true, 00:12:08.733 "data_offset": 2048, 00:12:08.733 "data_size": 63488 00:12:08.733 }, 00:12:08.733 { 00:12:08.733 "name": null, 00:12:08.733 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:08.733 "is_configured": false, 00:12:08.733 "data_offset": 0, 00:12:08.733 "data_size": 63488 00:12:08.733 }, 00:12:08.733 { 00:12:08.733 "name": "BaseBdev3", 00:12:08.733 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:08.733 "is_configured": true, 00:12:08.733 "data_offset": 2048, 00:12:08.733 "data_size": 63488 00:12:08.733 } 00:12:08.733 ] 00:12:08.733 }' 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.733 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.301 [2024-11-27 08:44:05.909427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.301 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.302 "name": "Existed_Raid", 00:12:09.302 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:09.302 "strip_size_kb": 64, 00:12:09.302 "state": "configuring", 00:12:09.302 "raid_level": "concat", 00:12:09.302 "superblock": true, 00:12:09.302 "num_base_bdevs": 3, 00:12:09.302 "num_base_bdevs_discovered": 1, 00:12:09.302 "num_base_bdevs_operational": 3, 00:12:09.302 "base_bdevs_list": [ 00:12:09.302 { 00:12:09.302 "name": "BaseBdev1", 00:12:09.302 "uuid": "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:09.302 "is_configured": true, 00:12:09.302 "data_offset": 2048, 00:12:09.302 "data_size": 63488 00:12:09.302 }, 00:12:09.302 { 00:12:09.302 "name": null, 00:12:09.302 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:09.302 "is_configured": false, 00:12:09.302 "data_offset": 0, 00:12:09.302 "data_size": 63488 00:12:09.302 }, 00:12:09.302 { 00:12:09.302 "name": null, 00:12:09.302 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:09.302 "is_configured": false, 00:12:09.302 "data_offset": 0, 00:12:09.302 "data_size": 63488 00:12:09.302 } 00:12:09.302 ] 00:12:09.302 }' 00:12:09.302 08:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.302 08:44:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.870 [2024-11-27 08:44:06.505678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.870 08:44:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.870 "name": "Existed_Raid", 00:12:09.870 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:09.870 "strip_size_kb": 64, 00:12:09.870 "state": "configuring", 00:12:09.870 "raid_level": "concat", 00:12:09.870 "superblock": true, 00:12:09.870 "num_base_bdevs": 3, 00:12:09.870 "num_base_bdevs_discovered": 2, 00:12:09.870 "num_base_bdevs_operational": 3, 00:12:09.870 "base_bdevs_list": [ 00:12:09.870 { 00:12:09.870 "name": "BaseBdev1", 00:12:09.870 "uuid": "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:09.870 "is_configured": true, 00:12:09.870 "data_offset": 2048, 00:12:09.870 "data_size": 63488 00:12:09.870 }, 00:12:09.870 { 00:12:09.870 "name": null, 00:12:09.870 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:09.870 "is_configured": 
false, 00:12:09.870 "data_offset": 0, 00:12:09.870 "data_size": 63488 00:12:09.870 }, 00:12:09.870 { 00:12:09.870 "name": "BaseBdev3", 00:12:09.870 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:09.870 "is_configured": true, 00:12:09.870 "data_offset": 2048, 00:12:09.870 "data_size": 63488 00:12:09.870 } 00:12:09.870 ] 00:12:09.870 }' 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.870 08:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.457 [2024-11-27 08:44:07.085892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:10.457 08:44:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.457 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.458 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.458 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.458 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.458 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.716 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.716 "name": "Existed_Raid", 00:12:10.716 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:10.716 "strip_size_kb": 64, 00:12:10.716 "state": "configuring", 00:12:10.716 "raid_level": "concat", 00:12:10.716 "superblock": true, 00:12:10.716 "num_base_bdevs": 3, 00:12:10.716 
"num_base_bdevs_discovered": 1, 00:12:10.716 "num_base_bdevs_operational": 3, 00:12:10.716 "base_bdevs_list": [ 00:12:10.716 { 00:12:10.716 "name": null, 00:12:10.716 "uuid": "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:10.716 "is_configured": false, 00:12:10.716 "data_offset": 0, 00:12:10.716 "data_size": 63488 00:12:10.716 }, 00:12:10.716 { 00:12:10.716 "name": null, 00:12:10.716 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:10.716 "is_configured": false, 00:12:10.716 "data_offset": 0, 00:12:10.716 "data_size": 63488 00:12:10.716 }, 00:12:10.716 { 00:12:10.716 "name": "BaseBdev3", 00:12:10.716 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:10.716 "is_configured": true, 00:12:10.716 "data_offset": 2048, 00:12:10.716 "data_size": 63488 00:12:10.716 } 00:12:10.716 ] 00:12:10.716 }' 00:12:10.716 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.716 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.976 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.976 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.976 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.976 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.235 08:44:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.235 [2024-11-27 08:44:07.778417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.235 
08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.235 "name": "Existed_Raid", 00:12:11.235 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:11.235 "strip_size_kb": 64, 00:12:11.235 "state": "configuring", 00:12:11.235 "raid_level": "concat", 00:12:11.235 "superblock": true, 00:12:11.235 "num_base_bdevs": 3, 00:12:11.235 "num_base_bdevs_discovered": 2, 00:12:11.235 "num_base_bdevs_operational": 3, 00:12:11.235 "base_bdevs_list": [ 00:12:11.235 { 00:12:11.235 "name": null, 00:12:11.235 "uuid": "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:11.235 "is_configured": false, 00:12:11.235 "data_offset": 0, 00:12:11.235 "data_size": 63488 00:12:11.235 }, 00:12:11.235 { 00:12:11.235 "name": "BaseBdev2", 00:12:11.235 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:11.235 "is_configured": true, 00:12:11.235 "data_offset": 2048, 00:12:11.235 "data_size": 63488 00:12:11.235 }, 00:12:11.235 { 00:12:11.235 "name": "BaseBdev3", 00:12:11.235 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:11.235 "is_configured": true, 00:12:11.235 "data_offset": 2048, 00:12:11.235 "data_size": 63488 00:12:11.235 } 00:12:11.235 ] 00:12:11.235 }' 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.235 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7d901f3a-eb9e-4e51-bf21-6bebc08eb05a 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.803 [2024-11-27 08:44:08.469565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:11.803 [2024-11-27 08:44:08.470003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:11.803 [2024-11-27 08:44:08.470030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:11.803 [2024-11-27 08:44:08.470437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:11.803 [2024-11-27 08:44:08.470655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:11.803 [2024-11-27 08:44:08.470675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:12:11.803 NewBaseBdev 00:12:11.803 [2024-11-27 08:44:08.470891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.803 [ 00:12:11.803 { 00:12:11.803 "name": "NewBaseBdev", 00:12:11.803 "aliases": [ 00:12:11.803 "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a" 00:12:11.803 ], 00:12:11.803 "product_name": "Malloc disk", 00:12:11.803 "block_size": 512, 
00:12:11.803 "num_blocks": 65536, 00:12:11.803 "uuid": "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:11.803 "assigned_rate_limits": { 00:12:11.803 "rw_ios_per_sec": 0, 00:12:11.803 "rw_mbytes_per_sec": 0, 00:12:11.803 "r_mbytes_per_sec": 0, 00:12:11.803 "w_mbytes_per_sec": 0 00:12:11.803 }, 00:12:11.803 "claimed": true, 00:12:11.803 "claim_type": "exclusive_write", 00:12:11.803 "zoned": false, 00:12:11.803 "supported_io_types": { 00:12:11.803 "read": true, 00:12:11.803 "write": true, 00:12:11.803 "unmap": true, 00:12:11.803 "flush": true, 00:12:11.803 "reset": true, 00:12:11.803 "nvme_admin": false, 00:12:11.803 "nvme_io": false, 00:12:11.803 "nvme_io_md": false, 00:12:11.803 "write_zeroes": true, 00:12:11.803 "zcopy": true, 00:12:11.803 "get_zone_info": false, 00:12:11.803 "zone_management": false, 00:12:11.803 "zone_append": false, 00:12:11.803 "compare": false, 00:12:11.803 "compare_and_write": false, 00:12:11.803 "abort": true, 00:12:11.803 "seek_hole": false, 00:12:11.803 "seek_data": false, 00:12:11.803 "copy": true, 00:12:11.803 "nvme_iov_md": false 00:12:11.803 }, 00:12:11.803 "memory_domains": [ 00:12:11.803 { 00:12:11.803 "dma_device_id": "system", 00:12:11.803 "dma_device_type": 1 00:12:11.803 }, 00:12:11.803 { 00:12:11.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.803 "dma_device_type": 2 00:12:11.803 } 00:12:11.803 ], 00:12:11.803 "driver_specific": {} 00:12:11.803 } 00:12:11.803 ] 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.803 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.063 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.063 "name": "Existed_Raid", 00:12:12.063 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:12.063 "strip_size_kb": 64, 00:12:12.063 "state": "online", 00:12:12.063 "raid_level": "concat", 00:12:12.063 "superblock": true, 00:12:12.063 "num_base_bdevs": 3, 00:12:12.063 "num_base_bdevs_discovered": 3, 00:12:12.063 "num_base_bdevs_operational": 3, 00:12:12.063 "base_bdevs_list": [ 00:12:12.063 { 00:12:12.063 "name": "NewBaseBdev", 00:12:12.063 "uuid": 
"7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:12.063 "is_configured": true, 00:12:12.063 "data_offset": 2048, 00:12:12.063 "data_size": 63488 00:12:12.063 }, 00:12:12.063 { 00:12:12.063 "name": "BaseBdev2", 00:12:12.063 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:12.063 "is_configured": true, 00:12:12.063 "data_offset": 2048, 00:12:12.063 "data_size": 63488 00:12:12.063 }, 00:12:12.063 { 00:12:12.063 "name": "BaseBdev3", 00:12:12.063 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:12.063 "is_configured": true, 00:12:12.063 "data_offset": 2048, 00:12:12.063 "data_size": 63488 00:12:12.063 } 00:12:12.063 ] 00:12:12.063 }' 00:12:12.063 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.063 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.322 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:12.322 [2024-11-27 08:44:09.002189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.322 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.322 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.322 "name": "Existed_Raid", 00:12:12.322 "aliases": [ 00:12:12.322 "3eb0f697-394e-4304-8780-474cdf1971d2" 00:12:12.322 ], 00:12:12.322 "product_name": "Raid Volume", 00:12:12.322 "block_size": 512, 00:12:12.322 "num_blocks": 190464, 00:12:12.322 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:12.322 "assigned_rate_limits": { 00:12:12.322 "rw_ios_per_sec": 0, 00:12:12.322 "rw_mbytes_per_sec": 0, 00:12:12.322 "r_mbytes_per_sec": 0, 00:12:12.322 "w_mbytes_per_sec": 0 00:12:12.322 }, 00:12:12.322 "claimed": false, 00:12:12.322 "zoned": false, 00:12:12.322 "supported_io_types": { 00:12:12.322 "read": true, 00:12:12.322 "write": true, 00:12:12.322 "unmap": true, 00:12:12.322 "flush": true, 00:12:12.322 "reset": true, 00:12:12.323 "nvme_admin": false, 00:12:12.323 "nvme_io": false, 00:12:12.323 "nvme_io_md": false, 00:12:12.323 "write_zeroes": true, 00:12:12.323 "zcopy": false, 00:12:12.323 "get_zone_info": false, 00:12:12.323 "zone_management": false, 00:12:12.323 "zone_append": false, 00:12:12.323 "compare": false, 00:12:12.323 "compare_and_write": false, 00:12:12.323 "abort": false, 00:12:12.323 "seek_hole": false, 00:12:12.323 "seek_data": false, 00:12:12.323 "copy": false, 00:12:12.323 "nvme_iov_md": false 00:12:12.323 }, 00:12:12.323 "memory_domains": [ 00:12:12.323 { 00:12:12.323 "dma_device_id": "system", 00:12:12.323 "dma_device_type": 1 00:12:12.323 }, 00:12:12.323 { 00:12:12.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.323 "dma_device_type": 2 00:12:12.323 }, 00:12:12.323 { 00:12:12.323 "dma_device_id": "system", 00:12:12.323 "dma_device_type": 1 00:12:12.323 }, 00:12:12.323 { 00:12:12.323 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.323 "dma_device_type": 2 00:12:12.323 }, 00:12:12.323 { 00:12:12.323 "dma_device_id": "system", 00:12:12.323 "dma_device_type": 1 00:12:12.323 }, 00:12:12.323 { 00:12:12.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.323 "dma_device_type": 2 00:12:12.323 } 00:12:12.323 ], 00:12:12.323 "driver_specific": { 00:12:12.323 "raid": { 00:12:12.323 "uuid": "3eb0f697-394e-4304-8780-474cdf1971d2", 00:12:12.323 "strip_size_kb": 64, 00:12:12.323 "state": "online", 00:12:12.323 "raid_level": "concat", 00:12:12.323 "superblock": true, 00:12:12.323 "num_base_bdevs": 3, 00:12:12.323 "num_base_bdevs_discovered": 3, 00:12:12.323 "num_base_bdevs_operational": 3, 00:12:12.323 "base_bdevs_list": [ 00:12:12.323 { 00:12:12.323 "name": "NewBaseBdev", 00:12:12.323 "uuid": "7d901f3a-eb9e-4e51-bf21-6bebc08eb05a", 00:12:12.323 "is_configured": true, 00:12:12.323 "data_offset": 2048, 00:12:12.323 "data_size": 63488 00:12:12.323 }, 00:12:12.323 { 00:12:12.323 "name": "BaseBdev2", 00:12:12.323 "uuid": "9afc2a14-af49-4529-82a4-798617942ab9", 00:12:12.323 "is_configured": true, 00:12:12.323 "data_offset": 2048, 00:12:12.323 "data_size": 63488 00:12:12.323 }, 00:12:12.323 { 00:12:12.323 "name": "BaseBdev3", 00:12:12.323 "uuid": "af81c486-6ae5-4d76-a3cf-0ff96ce1ca16", 00:12:12.323 "is_configured": true, 00:12:12.323 "data_offset": 2048, 00:12:12.323 "data_size": 63488 00:12:12.323 } 00:12:12.323 ] 00:12:12.323 } 00:12:12.323 } 00:12:12.323 }' 00:12:12.323 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:12.582 BaseBdev2 00:12:12.582 BaseBdev3' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.582 [2024-11-27 08:44:09.317849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.582 [2024-11-27 08:44:09.317889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.582 [2024-11-27 08:44:09.318016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.582 [2024-11-27 08:44:09.318104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.582 [2024-11-27 08:44:09.318140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66385 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 66385 ']' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 66385 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:12.582 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 66385 00:12:12.841 killing process with pid 66385 00:12:12.841 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:12.841 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:12.841 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 66385' 00:12:12.841 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 66385 00:12:12.841 [2024-11-27 08:44:09.356505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.841 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 66385 00:12:13.099 [2024-11-27 08:44:09.655781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.485 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:14.485 00:12:14.485 real 0m12.169s 00:12:14.485 user 0m19.870s 00:12:14.485 sys 0m1.850s 00:12:14.485 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # 
xtrace_disable 00:12:14.485 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.485 ************************************ 00:12:14.485 END TEST raid_state_function_test_sb 00:12:14.485 ************************************ 00:12:14.485 08:44:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:14.485 08:44:10 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:12:14.485 08:44:10 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:14.485 08:44:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.485 ************************************ 00:12:14.485 START TEST raid_superblock_test 00:12:14.485 ************************************ 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test concat 3 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:14.485 08:44:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:14.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67022 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67022 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 67022 ']' 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:14.485 08:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.485 [2024-11-27 08:44:10.992428] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:12:14.485 [2024-11-27 08:44:10.992647] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67022 ] 00:12:14.485 [2024-11-27 08:44:11.182745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.796 [2024-11-27 08:44:11.338670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.087 [2024-11-27 08:44:11.573625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.088 [2024-11-27 08:44:11.573731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:15.346 
08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.346 malloc1 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:15.346 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.347 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.347 [2024-11-27 08:44:12.098751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.347 [2024-11-27 08:44:12.099020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.347 [2024-11-27 08:44:12.099196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:15.347 [2024-11-27 08:44:12.099346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.347 [2024-11-27 08:44:12.102588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.347 [2024-11-27 08:44:12.102805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.347 pt1 00:12:15.606 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.606 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.606 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.606 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:15.606 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:15.606 08:44:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.607 malloc2 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.607 [2024-11-27 08:44:12.158972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:15.607 [2024-11-27 08:44:12.159065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.607 [2024-11-27 08:44:12.159098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:15.607 [2024-11-27 08:44:12.159113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.607 [2024-11-27 08:44:12.162282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.607 [2024-11-27 08:44:12.162329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:15.607 
pt2 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.607 malloc3 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.607 [2024-11-27 08:44:12.234055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:15.607 [2024-11-27 08:44:12.234175] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.607 [2024-11-27 08:44:12.234213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:15.607 [2024-11-27 08:44:12.234231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.607 [2024-11-27 08:44:12.237378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.607 [2024-11-27 08:44:12.237454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:15.607 pt3 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.607 [2024-11-27 08:44:12.246342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:15.607 [2024-11-27 08:44:12.249591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:15.607 [2024-11-27 08:44:12.249842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:15.607 [2024-11-27 08:44:12.250162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:15.607 [2024-11-27 08:44:12.250324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:15.607 [2024-11-27 08:44:12.250786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:15.607 [2024-11-27 08:44:12.251215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:15.607 [2024-11-27 08:44:12.251384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:15.607 [2024-11-27 08:44:12.251829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.607 08:44:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.607 "name": "raid_bdev1", 00:12:15.607 "uuid": "e358e5d4-9240-418e-bd66-246f9576b180", 00:12:15.607 "strip_size_kb": 64, 00:12:15.607 "state": "online", 00:12:15.607 "raid_level": "concat", 00:12:15.607 "superblock": true, 00:12:15.607 "num_base_bdevs": 3, 00:12:15.607 "num_base_bdevs_discovered": 3, 00:12:15.607 "num_base_bdevs_operational": 3, 00:12:15.607 "base_bdevs_list": [ 00:12:15.607 { 00:12:15.607 "name": "pt1", 00:12:15.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.607 "is_configured": true, 00:12:15.607 "data_offset": 2048, 00:12:15.607 "data_size": 63488 00:12:15.607 }, 00:12:15.607 { 00:12:15.607 "name": "pt2", 00:12:15.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.607 "is_configured": true, 00:12:15.607 "data_offset": 2048, 00:12:15.607 "data_size": 63488 00:12:15.607 }, 00:12:15.607 { 00:12:15.607 "name": "pt3", 00:12:15.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.607 "is_configured": true, 00:12:15.607 "data_offset": 2048, 00:12:15.607 "data_size": 63488 00:12:15.607 } 00:12:15.607 ] 00:12:15.607 }' 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.607 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.175 [2024-11-27 08:44:12.807041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.175 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.175 "name": "raid_bdev1", 00:12:16.175 "aliases": [ 00:12:16.175 "e358e5d4-9240-418e-bd66-246f9576b180" 00:12:16.175 ], 00:12:16.175 "product_name": "Raid Volume", 00:12:16.175 "block_size": 512, 00:12:16.175 "num_blocks": 190464, 00:12:16.175 "uuid": "e358e5d4-9240-418e-bd66-246f9576b180", 00:12:16.175 "assigned_rate_limits": { 00:12:16.175 "rw_ios_per_sec": 0, 00:12:16.175 "rw_mbytes_per_sec": 0, 00:12:16.175 "r_mbytes_per_sec": 0, 00:12:16.175 "w_mbytes_per_sec": 0 00:12:16.175 }, 00:12:16.175 "claimed": false, 00:12:16.175 "zoned": false, 00:12:16.175 "supported_io_types": { 00:12:16.175 "read": true, 00:12:16.175 "write": true, 00:12:16.175 "unmap": true, 00:12:16.175 "flush": true, 00:12:16.175 "reset": true, 00:12:16.175 "nvme_admin": false, 00:12:16.175 "nvme_io": false, 00:12:16.175 "nvme_io_md": false, 00:12:16.175 "write_zeroes": true, 00:12:16.175 "zcopy": false, 00:12:16.175 "get_zone_info": false, 00:12:16.175 "zone_management": false, 00:12:16.175 "zone_append": false, 00:12:16.175 "compare": 
false, 00:12:16.175 "compare_and_write": false, 00:12:16.175 "abort": false, 00:12:16.175 "seek_hole": false, 00:12:16.175 "seek_data": false, 00:12:16.175 "copy": false, 00:12:16.175 "nvme_iov_md": false 00:12:16.175 }, 00:12:16.175 "memory_domains": [ 00:12:16.175 { 00:12:16.176 "dma_device_id": "system", 00:12:16.176 "dma_device_type": 1 00:12:16.176 }, 00:12:16.176 { 00:12:16.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.176 "dma_device_type": 2 00:12:16.176 }, 00:12:16.176 { 00:12:16.176 "dma_device_id": "system", 00:12:16.176 "dma_device_type": 1 00:12:16.176 }, 00:12:16.176 { 00:12:16.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.176 "dma_device_type": 2 00:12:16.176 }, 00:12:16.176 { 00:12:16.176 "dma_device_id": "system", 00:12:16.176 "dma_device_type": 1 00:12:16.176 }, 00:12:16.176 { 00:12:16.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.176 "dma_device_type": 2 00:12:16.176 } 00:12:16.176 ], 00:12:16.176 "driver_specific": { 00:12:16.176 "raid": { 00:12:16.176 "uuid": "e358e5d4-9240-418e-bd66-246f9576b180", 00:12:16.176 "strip_size_kb": 64, 00:12:16.176 "state": "online", 00:12:16.176 "raid_level": "concat", 00:12:16.176 "superblock": true, 00:12:16.176 "num_base_bdevs": 3, 00:12:16.176 "num_base_bdevs_discovered": 3, 00:12:16.176 "num_base_bdevs_operational": 3, 00:12:16.176 "base_bdevs_list": [ 00:12:16.176 { 00:12:16.176 "name": "pt1", 00:12:16.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.176 "is_configured": true, 00:12:16.176 "data_offset": 2048, 00:12:16.176 "data_size": 63488 00:12:16.176 }, 00:12:16.176 { 00:12:16.176 "name": "pt2", 00:12:16.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.176 "is_configured": true, 00:12:16.176 "data_offset": 2048, 00:12:16.176 "data_size": 63488 00:12:16.176 }, 00:12:16.176 { 00:12:16.176 "name": "pt3", 00:12:16.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.176 "is_configured": true, 00:12:16.176 "data_offset": 2048, 00:12:16.176 
"data_size": 63488 00:12:16.176 } 00:12:16.176 ] 00:12:16.176 } 00:12:16.176 } 00:12:16.176 }' 00:12:16.176 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.176 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:16.176 pt2 00:12:16.176 pt3' 00:12:16.176 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.435 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.436 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.436 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:16.436 08:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.436 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.436 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.436 08:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:16.436 [2024-11-27 08:44:13.130970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e358e5d4-9240-418e-bd66-246f9576b180 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e358e5d4-9240-418e-bd66-246f9576b180 ']' 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.436 [2024-11-27 08:44:13.186607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.436 [2024-11-27 08:44:13.186860] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.436 [2024-11-27 08:44:13.187013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.436 [2024-11-27 08:44:13.187137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.436 [2024-11-27 08:44:13.187171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:16.436 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 [2024-11-27 08:44:13.342715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:16.696 [2024-11-27 08:44:13.345820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:16.696 
[2024-11-27 08:44:13.346046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:16.696 [2024-11-27 08:44:13.346208] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:16.696 [2024-11-27 08:44:13.346465] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:16.696 [2024-11-27 08:44:13.346709] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:16.696 [2024-11-27 08:44:13.346914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.696 [2024-11-27 08:44:13.347075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:16.696 request: 00:12:16.696 { 00:12:16.696 "name": "raid_bdev1", 00:12:16.696 "raid_level": "concat", 00:12:16.696 "base_bdevs": [ 00:12:16.696 "malloc1", 00:12:16.696 "malloc2", 00:12:16.696 "malloc3" 00:12:16.696 ], 00:12:16.696 "strip_size_kb": 64, 00:12:16.696 "superblock": false, 00:12:16.696 "method": "bdev_raid_create", 00:12:16.696 "req_id": 1 00:12:16.696 } 00:12:16.696 Got JSON-RPC error response 00:12:16.696 response: 00:12:16.696 { 00:12:16.696 "code": -17, 00:12:16.696 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:16.696 } 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.696 08:44:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.696 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 [2024-11-27 08:44:13.415574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:16.696 [2024-11-27 08:44:13.415711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.696 [2024-11-27 08:44:13.415789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:16.696 [2024-11-27 08:44:13.415807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.696 [2024-11-27 08:44:13.419781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.696 [2024-11-27 08:44:13.419837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:16.696 [2024-11-27 08:44:13.420007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:16.697 [2024-11-27 08:44:13.420115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:12:16.697 pt1 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.697 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.956 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.956 "name": "raid_bdev1", 00:12:16.956 "uuid": 
"e358e5d4-9240-418e-bd66-246f9576b180", 00:12:16.956 "strip_size_kb": 64, 00:12:16.956 "state": "configuring", 00:12:16.956 "raid_level": "concat", 00:12:16.956 "superblock": true, 00:12:16.956 "num_base_bdevs": 3, 00:12:16.956 "num_base_bdevs_discovered": 1, 00:12:16.956 "num_base_bdevs_operational": 3, 00:12:16.956 "base_bdevs_list": [ 00:12:16.956 { 00:12:16.956 "name": "pt1", 00:12:16.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.956 "is_configured": true, 00:12:16.956 "data_offset": 2048, 00:12:16.956 "data_size": 63488 00:12:16.956 }, 00:12:16.956 { 00:12:16.956 "name": null, 00:12:16.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.956 "is_configured": false, 00:12:16.956 "data_offset": 2048, 00:12:16.956 "data_size": 63488 00:12:16.956 }, 00:12:16.956 { 00:12:16.956 "name": null, 00:12:16.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.956 "is_configured": false, 00:12:16.956 "data_offset": 2048, 00:12:16.956 "data_size": 63488 00:12:16.956 } 00:12:16.956 ] 00:12:16.956 }' 00:12:16.956 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.956 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.215 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:17.215 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.215 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.215 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.474 [2024-11-27 08:44:13.976346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.474 [2024-11-27 08:44:13.976639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.474 [2024-11-27 08:44:13.976712] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:17.474 [2024-11-27 08:44:13.976733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.474 [2024-11-27 08:44:13.977531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.474 [2024-11-27 08:44:13.977574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.474 [2024-11-27 08:44:13.977711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.474 [2024-11-27 08:44:13.977748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.474 pt2 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.474 [2024-11-27 08:44:13.984271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.474 08:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.475 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.475 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.475 "name": "raid_bdev1", 00:12:17.475 "uuid": "e358e5d4-9240-418e-bd66-246f9576b180", 00:12:17.475 "strip_size_kb": 64, 00:12:17.475 "state": "configuring", 00:12:17.475 "raid_level": "concat", 00:12:17.475 "superblock": true, 00:12:17.475 "num_base_bdevs": 3, 00:12:17.475 "num_base_bdevs_discovered": 1, 00:12:17.475 "num_base_bdevs_operational": 3, 00:12:17.475 "base_bdevs_list": [ 00:12:17.475 { 00:12:17.475 "name": "pt1", 00:12:17.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.475 "is_configured": true, 00:12:17.475 "data_offset": 2048, 00:12:17.475 "data_size": 63488 00:12:17.475 }, 00:12:17.475 { 00:12:17.475 "name": null, 00:12:17.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.475 "is_configured": false, 00:12:17.475 "data_offset": 0, 00:12:17.475 "data_size": 63488 00:12:17.475 }, 00:12:17.475 { 00:12:17.475 "name": null, 00:12:17.475 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:17.475 "is_configured": false, 00:12:17.475 "data_offset": 2048, 00:12:17.475 "data_size": 63488 00:12:17.475 } 00:12:17.475 ] 00:12:17.475 }' 00:12:17.475 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.475 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 [2024-11-27 08:44:14.556433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:18.045 [2024-11-27 08:44:14.556537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.045 [2024-11-27 08:44:14.556572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:18.045 [2024-11-27 08:44:14.556592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.045 [2024-11-27 08:44:14.557315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.045 [2024-11-27 08:44:14.557349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:18.045 [2024-11-27 08:44:14.557494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:18.045 [2024-11-27 08:44:14.557539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.045 pt2 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 [2024-11-27 08:44:14.564361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:18.045 [2024-11-27 08:44:14.564432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.045 [2024-11-27 08:44:14.564458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:18.045 [2024-11-27 08:44:14.564476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.045 [2024-11-27 08:44:14.564933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.045 [2024-11-27 08:44:14.564987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:18.045 [2024-11-27 08:44:14.565066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:18.045 [2024-11-27 08:44:14.565100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:18.045 [2024-11-27 08:44:14.565253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:18.045 [2024-11-27 08:44:14.565276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:18.045 [2024-11-27 08:44:14.565637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:18.045 [2024-11-27 
08:44:14.565841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:18.045 [2024-11-27 08:44:14.565857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:18.045 [2024-11-27 08:44:14.566034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.045 pt3 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.045 "name": "raid_bdev1", 00:12:18.045 "uuid": "e358e5d4-9240-418e-bd66-246f9576b180", 00:12:18.045 "strip_size_kb": 64, 00:12:18.045 "state": "online", 00:12:18.045 "raid_level": "concat", 00:12:18.045 "superblock": true, 00:12:18.045 "num_base_bdevs": 3, 00:12:18.045 "num_base_bdevs_discovered": 3, 00:12:18.045 "num_base_bdevs_operational": 3, 00:12:18.045 "base_bdevs_list": [ 00:12:18.045 { 00:12:18.045 "name": "pt1", 00:12:18.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.045 "is_configured": true, 00:12:18.045 "data_offset": 2048, 00:12:18.045 "data_size": 63488 00:12:18.045 }, 00:12:18.045 { 00:12:18.045 "name": "pt2", 00:12:18.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.045 "is_configured": true, 00:12:18.045 "data_offset": 2048, 00:12:18.045 "data_size": 63488 00:12:18.045 }, 00:12:18.045 { 00:12:18.045 "name": "pt3", 00:12:18.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.045 "is_configured": true, 00:12:18.045 "data_offset": 2048, 00:12:18.045 "data_size": 63488 00:12:18.045 } 00:12:18.045 ] 00:12:18.045 }' 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.045 08:44:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.738 [2024-11-27 08:44:15.137095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.738 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.738 "name": "raid_bdev1", 00:12:18.738 "aliases": [ 00:12:18.738 "e358e5d4-9240-418e-bd66-246f9576b180" 00:12:18.738 ], 00:12:18.739 "product_name": "Raid Volume", 00:12:18.739 "block_size": 512, 00:12:18.739 "num_blocks": 190464, 00:12:18.739 "uuid": "e358e5d4-9240-418e-bd66-246f9576b180", 00:12:18.739 "assigned_rate_limits": { 00:12:18.739 "rw_ios_per_sec": 0, 00:12:18.739 "rw_mbytes_per_sec": 0, 00:12:18.739 "r_mbytes_per_sec": 0, 00:12:18.739 "w_mbytes_per_sec": 0 00:12:18.739 }, 00:12:18.739 "claimed": false, 00:12:18.739 "zoned": false, 00:12:18.739 "supported_io_types": { 00:12:18.739 "read": true, 00:12:18.739 "write": true, 00:12:18.739 "unmap": true, 00:12:18.739 "flush": true, 00:12:18.739 "reset": true, 00:12:18.739 "nvme_admin": false, 00:12:18.739 "nvme_io": false, 00:12:18.739 "nvme_io_md": false, 
00:12:18.739 "write_zeroes": true, 00:12:18.739 "zcopy": false, 00:12:18.739 "get_zone_info": false, 00:12:18.739 "zone_management": false, 00:12:18.739 "zone_append": false, 00:12:18.739 "compare": false, 00:12:18.739 "compare_and_write": false, 00:12:18.739 "abort": false, 00:12:18.739 "seek_hole": false, 00:12:18.739 "seek_data": false, 00:12:18.739 "copy": false, 00:12:18.739 "nvme_iov_md": false 00:12:18.739 }, 00:12:18.739 "memory_domains": [ 00:12:18.739 { 00:12:18.739 "dma_device_id": "system", 00:12:18.739 "dma_device_type": 1 00:12:18.739 }, 00:12:18.739 { 00:12:18.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.739 "dma_device_type": 2 00:12:18.739 }, 00:12:18.739 { 00:12:18.739 "dma_device_id": "system", 00:12:18.739 "dma_device_type": 1 00:12:18.739 }, 00:12:18.739 { 00:12:18.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.739 "dma_device_type": 2 00:12:18.739 }, 00:12:18.739 { 00:12:18.739 "dma_device_id": "system", 00:12:18.739 "dma_device_type": 1 00:12:18.739 }, 00:12:18.739 { 00:12:18.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.739 "dma_device_type": 2 00:12:18.739 } 00:12:18.739 ], 00:12:18.739 "driver_specific": { 00:12:18.739 "raid": { 00:12:18.739 "uuid": "e358e5d4-9240-418e-bd66-246f9576b180", 00:12:18.739 "strip_size_kb": 64, 00:12:18.739 "state": "online", 00:12:18.739 "raid_level": "concat", 00:12:18.739 "superblock": true, 00:12:18.739 "num_base_bdevs": 3, 00:12:18.739 "num_base_bdevs_discovered": 3, 00:12:18.739 "num_base_bdevs_operational": 3, 00:12:18.739 "base_bdevs_list": [ 00:12:18.739 { 00:12:18.739 "name": "pt1", 00:12:18.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.739 "is_configured": true, 00:12:18.739 "data_offset": 2048, 00:12:18.739 "data_size": 63488 00:12:18.739 }, 00:12:18.739 { 00:12:18.740 "name": "pt2", 00:12:18.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.740 "is_configured": true, 00:12:18.740 "data_offset": 2048, 00:12:18.740 "data_size": 63488 00:12:18.740 }, 
00:12:18.740 { 00:12:18.740 "name": "pt3", 00:12:18.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.740 "is_configured": true, 00:12:18.740 "data_offset": 2048, 00:12:18.740 "data_size": 63488 00:12:18.740 } 00:12:18.740 ] 00:12:18.740 } 00:12:18.740 } 00:12:18.740 }' 00:12:18.740 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.740 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:18.740 pt2 00:12:18.740 pt3' 00:12:18.740 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.740 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:18.741 08:44:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.741 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:19.001 
[2024-11-27 08:44:15.469009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e358e5d4-9240-418e-bd66-246f9576b180 '!=' e358e5d4-9240-418e-bd66-246f9576b180 ']' 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67022 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 67022 ']' 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 67022 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # uname 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 67022 00:12:19.001 killing process with pid 67022 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:19.001 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 67022' 00:12:19.002 08:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # kill 67022 00:12:19.002 [2024-11-27 08:44:15.553879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.002 08:44:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@975 -- # wait 67022 00:12:19.002 [2024-11-27 08:44:15.554011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.002 [2024-11-27 08:44:15.554161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.002 [2024-11-27 08:44:15.554183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:19.261 [2024-11-27 08:44:15.860220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.640 08:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:20.640 00:12:20.640 real 0m6.122s 00:12:20.640 user 0m9.122s 00:12:20.640 sys 0m0.981s 00:12:20.640 08:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:20.640 ************************************ 00:12:20.640 END TEST raid_superblock_test 00:12:20.640 ************************************ 00:12:20.640 08:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.640 08:44:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:12:20.640 08:44:17 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:12:20.640 08:44:17 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:20.640 08:44:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.640 ************************************ 00:12:20.640 START TEST raid_read_error_test 00:12:20.640 ************************************ 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test concat 3 read 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:20.640 08:44:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oS69afHh4l 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67286 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67286 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 67286 ']' 00:12:20.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:20.640 08:44:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.640 [2024-11-27 08:44:17.177675] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:12:20.640 [2024-11-27 08:44:17.178226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67286 ] 00:12:20.640 [2024-11-27 08:44:17.358278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.899 [2024-11-27 08:44:17.508330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.158 [2024-11-27 08:44:17.737494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.158 [2024-11-27 08:44:17.737602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.726 BaseBdev1_malloc 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.726 true 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.726 [2024-11-27 08:44:18.332931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:21.726 [2024-11-27 08:44:18.333065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.726 [2024-11-27 08:44:18.333133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:21.726 [2024-11-27 08:44:18.333154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.726 [2024-11-27 08:44:18.336793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.726 [2024-11-27 08:44:18.336885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.726 BaseBdev1 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.726 BaseBdev2_malloc 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.726 true 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.726 [2024-11-27 08:44:18.402623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:21.726 [2024-11-27 08:44:18.403142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.726 [2024-11-27 08:44:18.403194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:21.726 [2024-11-27 08:44:18.403218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.726 [2024-11-27 08:44:18.407010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.726 [2024-11-27 08:44:18.407080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.726 BaseBdev2 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.726 BaseBdev3_malloc 00:12:21.726 08:44:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.726 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.985 true 00:12:21.985 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.985 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:21.985 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.985 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.985 [2024-11-27 08:44:18.491397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:21.985 [2024-11-27 08:44:18.491553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.985 [2024-11-27 08:44:18.491602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:21.985 [2024-11-27 08:44:18.491625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.986 [2024-11-27 08:44:18.495474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.986 [2024-11-27 08:44:18.495526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:21.986 BaseBdev3 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.986 [2024-11-27 08:44:18.503924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.986 [2024-11-27 08:44:18.506917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.986 [2024-11-27 08:44:18.507265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.986 [2024-11-27 08:44:18.507595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.986 [2024-11-27 08:44:18.507617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:21.986 [2024-11-27 08:44:18.508011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:21.986 [2024-11-27 08:44:18.508271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.986 [2024-11-27 08:44:18.508296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:21.986 [2024-11-27 08:44:18.508581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.986 08:44:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.986 "name": "raid_bdev1", 00:12:21.986 "uuid": "910e229e-5662-46c1-96dd-93b675a6ddb0", 00:12:21.986 "strip_size_kb": 64, 00:12:21.986 "state": "online", 00:12:21.986 "raid_level": "concat", 00:12:21.986 "superblock": true, 00:12:21.986 "num_base_bdevs": 3, 00:12:21.986 "num_base_bdevs_discovered": 3, 00:12:21.986 "num_base_bdevs_operational": 3, 00:12:21.986 "base_bdevs_list": [ 00:12:21.986 { 00:12:21.986 "name": "BaseBdev1", 00:12:21.986 "uuid": "a8e98da7-685a-5c0f-8def-3aabd7001a24", 00:12:21.986 "is_configured": true, 00:12:21.986 "data_offset": 2048, 00:12:21.986 "data_size": 63488 00:12:21.986 }, 00:12:21.986 { 00:12:21.986 "name": "BaseBdev2", 00:12:21.986 "uuid": "49bb0b26-639b-575a-91b7-b5bb8d7e5a47", 00:12:21.986 "is_configured": true, 00:12:21.986 "data_offset": 2048, 00:12:21.986 "data_size": 63488 
00:12:21.986 }, 00:12:21.986 { 00:12:21.986 "name": "BaseBdev3", 00:12:21.986 "uuid": "5d3cb868-bc29-537d-af3f-d875ab488d00", 00:12:21.986 "is_configured": true, 00:12:21.986 "data_offset": 2048, 00:12:21.986 "data_size": 63488 00:12:21.986 } 00:12:21.986 ] 00:12:21.986 }' 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.986 08:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.558 08:44:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:22.558 08:44:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:22.558 [2024-11-27 08:44:19.142323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.495 "name": "raid_bdev1", 00:12:23.495 "uuid": "910e229e-5662-46c1-96dd-93b675a6ddb0", 00:12:23.495 "strip_size_kb": 64, 00:12:23.495 "state": "online", 00:12:23.495 "raid_level": "concat", 00:12:23.495 "superblock": true, 00:12:23.495 "num_base_bdevs": 3, 00:12:23.495 "num_base_bdevs_discovered": 3, 00:12:23.495 "num_base_bdevs_operational": 3, 00:12:23.495 "base_bdevs_list": [ 00:12:23.495 { 00:12:23.495 "name": "BaseBdev1", 00:12:23.495 "uuid": "a8e98da7-685a-5c0f-8def-3aabd7001a24", 00:12:23.495 "is_configured": true, 00:12:23.495 "data_offset": 2048, 00:12:23.495 "data_size": 63488 
00:12:23.495 }, 00:12:23.495 { 00:12:23.495 "name": "BaseBdev2", 00:12:23.495 "uuid": "49bb0b26-639b-575a-91b7-b5bb8d7e5a47", 00:12:23.495 "is_configured": true, 00:12:23.495 "data_offset": 2048, 00:12:23.495 "data_size": 63488 00:12:23.495 }, 00:12:23.495 { 00:12:23.495 "name": "BaseBdev3", 00:12:23.495 "uuid": "5d3cb868-bc29-537d-af3f-d875ab488d00", 00:12:23.495 "is_configured": true, 00:12:23.495 "data_offset": 2048, 00:12:23.495 "data_size": 63488 00:12:23.495 } 00:12:23.495 ] 00:12:23.495 }' 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.495 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.163 [2024-11-27 08:44:20.549689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.163 [2024-11-27 08:44:20.549761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.163 [2024-11-27 08:44:20.553612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.163 [2024-11-27 08:44:20.553944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.163 [2024-11-27 08:44:20.554184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.163 [2024-11-27 08:44:20.554444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:24.163 { 00:12:24.163 "results": [ 00:12:24.163 { 00:12:24.163 "job": "raid_bdev1", 00:12:24.163 "core_mask": "0x1", 00:12:24.163 "workload": "randrw", 00:12:24.163 "percentage": 50, 
00:12:24.163 "status": "finished", 00:12:24.163 "queue_depth": 1, 00:12:24.163 "io_size": 131072, 00:12:24.163 "runtime": 1.404239, 00:12:24.163 "iops": 9224.925386632902, 00:12:24.163 "mibps": 1153.1156733291127, 00:12:24.163 "io_failed": 1, 00:12:24.163 "io_timeout": 0, 00:12:24.163 "avg_latency_us": 152.90531223465842, 00:12:24.163 "min_latency_us": 38.86545454545455, 00:12:24.163 "max_latency_us": 1861.8181818181818 00:12:24.163 } 00:12:24.163 ], 00:12:24.163 "core_count": 1 00:12:24.163 } 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67286 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 67286 ']' 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 67286 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # uname 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 67286 00:12:24.163 killing process with pid 67286 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 67286' 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 67286 00:12:24.163 08:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 67286 00:12:24.163 [2024-11-27 08:44:20.594600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.163 [2024-11-27 
08:44:20.822808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oS69afHh4l 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:25.539 ************************************ 00:12:25.539 END TEST raid_read_error_test 00:12:25.539 ************************************ 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:25.539 00:12:25.539 real 0m4.924s 00:12:25.539 user 0m6.045s 00:12:25.539 sys 0m0.708s 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:25.539 08:44:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.539 08:44:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:12:25.539 08:44:22 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:12:25.540 08:44:22 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:25.540 08:44:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.540 ************************************ 00:12:25.540 START TEST raid_write_error_test 00:12:25.540 ************************************ 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test concat 3 write 00:12:25.540 08:44:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:25.540 08:44:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y0jtNLoAjw 00:12:25.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67432 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67432 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 67432 ']' 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:25.540 08:44:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.540 [2024-11-27 08:44:22.146259] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:12:25.540 [2024-11-27 08:44:22.146816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67432 ] 00:12:25.799 [2024-11-27 08:44:22.326468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.799 [2024-11-27 08:44:22.475439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.056 [2024-11-27 08:44:22.702983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.056 [2024-11-27 08:44:22.703324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.624 BaseBdev1_malloc 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.624 true 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.624 [2024-11-27 08:44:23.228131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:26.624 [2024-11-27 08:44:23.228237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.624 [2024-11-27 08:44:23.228272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:26.624 [2024-11-27 08:44:23.228292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.624 [2024-11-27 08:44:23.231454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.624 [2024-11-27 08:44:23.231508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:26.624 BaseBdev1 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.624 BaseBdev2_malloc 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.624 true 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.624 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.624 [2024-11-27 08:44:23.296837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:26.624 [2024-11-27 08:44:23.296961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.624 [2024-11-27 08:44:23.296993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:26.625 [2024-11-27 08:44:23.297012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.625 [2024-11-27 08:44:23.300376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.625 [2024-11-27 08:44:23.300436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:26.625 BaseBdev2 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:26.625 08:44:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.625 BaseBdev3_malloc 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.625 true 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.625 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.625 [2024-11-27 08:44:23.377064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:26.625 [2024-11-27 08:44:23.377177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.625 [2024-11-27 08:44:23.377211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:26.625 [2024-11-27 08:44:23.377232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.625 [2024-11-27 08:44:23.380660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.625 [2024-11-27 08:44:23.380735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:26.625 BaseBdev3 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.883 [2024-11-27 08:44:23.385234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.883 [2024-11-27 08:44:23.388092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.883 [2024-11-27 08:44:23.388220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.883 [2024-11-27 08:44:23.388547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:26.883 [2024-11-27 08:44:23.388568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:26.883 [2024-11-27 08:44:23.388983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:26.883 [2024-11-27 08:44:23.389237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:26.883 [2024-11-27 08:44:23.389262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:26.883 [2024-11-27 08:44:23.389564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.883 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.883 "name": "raid_bdev1", 00:12:26.883 "uuid": "4fa0848a-62c7-4aba-874a-adeb963410db", 00:12:26.883 "strip_size_kb": 64, 00:12:26.883 "state": "online", 00:12:26.883 "raid_level": "concat", 00:12:26.883 "superblock": true, 00:12:26.883 "num_base_bdevs": 3, 00:12:26.883 "num_base_bdevs_discovered": 3, 00:12:26.883 "num_base_bdevs_operational": 3, 00:12:26.883 "base_bdevs_list": [ 00:12:26.883 { 00:12:26.883 
"name": "BaseBdev1", 00:12:26.883 "uuid": "b2e9e5c8-8936-572f-b78c-12174dafd73c", 00:12:26.884 "is_configured": true, 00:12:26.884 "data_offset": 2048, 00:12:26.884 "data_size": 63488 00:12:26.884 }, 00:12:26.884 { 00:12:26.884 "name": "BaseBdev2", 00:12:26.884 "uuid": "e1642623-8157-5700-b109-3a958a4491c8", 00:12:26.884 "is_configured": true, 00:12:26.884 "data_offset": 2048, 00:12:26.884 "data_size": 63488 00:12:26.884 }, 00:12:26.884 { 00:12:26.884 "name": "BaseBdev3", 00:12:26.884 "uuid": "a5243f62-79f7-5ba0-bc82-d8c5c840ab3a", 00:12:26.884 "is_configured": true, 00:12:26.884 "data_offset": 2048, 00:12:26.884 "data_size": 63488 00:12:26.884 } 00:12:26.884 ] 00:12:26.884 }' 00:12:26.884 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.884 08:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.451 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:27.451 08:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:27.451 [2024-11-27 08:44:24.035386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.391 "name": "raid_bdev1", 00:12:28.391 "uuid": "4fa0848a-62c7-4aba-874a-adeb963410db", 00:12:28.391 "strip_size_kb": 64, 00:12:28.391 "state": "online", 
00:12:28.391 "raid_level": "concat", 00:12:28.391 "superblock": true, 00:12:28.391 "num_base_bdevs": 3, 00:12:28.391 "num_base_bdevs_discovered": 3, 00:12:28.391 "num_base_bdevs_operational": 3, 00:12:28.391 "base_bdevs_list": [ 00:12:28.391 { 00:12:28.391 "name": "BaseBdev1", 00:12:28.391 "uuid": "b2e9e5c8-8936-572f-b78c-12174dafd73c", 00:12:28.391 "is_configured": true, 00:12:28.391 "data_offset": 2048, 00:12:28.391 "data_size": 63488 00:12:28.391 }, 00:12:28.391 { 00:12:28.391 "name": "BaseBdev2", 00:12:28.391 "uuid": "e1642623-8157-5700-b109-3a958a4491c8", 00:12:28.391 "is_configured": true, 00:12:28.391 "data_offset": 2048, 00:12:28.391 "data_size": 63488 00:12:28.391 }, 00:12:28.391 { 00:12:28.391 "name": "BaseBdev3", 00:12:28.391 "uuid": "a5243f62-79f7-5ba0-bc82-d8c5c840ab3a", 00:12:28.391 "is_configured": true, 00:12:28.391 "data_offset": 2048, 00:12:28.391 "data_size": 63488 00:12:28.391 } 00:12:28.391 ] 00:12:28.391 }' 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.391 08:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.958 [2024-11-27 08:44:25.467475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.958 [2024-11-27 08:44:25.467660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.958 [2024-11-27 08:44:25.471300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.958 [2024-11-27 08:44:25.471595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.958 [2024-11-27 08:44:25.471795] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.958 { 00:12:28.958 "results": [ 00:12:28.958 { 00:12:28.958 "job": "raid_bdev1", 00:12:28.958 "core_mask": "0x1", 00:12:28.958 "workload": "randrw", 00:12:28.958 "percentage": 50, 00:12:28.958 "status": "finished", 00:12:28.958 "queue_depth": 1, 00:12:28.958 "io_size": 131072, 00:12:28.958 "runtime": 1.42916, 00:12:28.958 "iops": 9279.576814352487, 00:12:28.958 "mibps": 1159.947101794061, 00:12:28.958 "io_failed": 1, 00:12:28.958 "io_timeout": 0, 00:12:28.958 "avg_latency_us": 152.1826061565668, 00:12:28.958 "min_latency_us": 40.96, 00:12:28.958 "max_latency_us": 1861.8181818181818 00:12:28.958 } 00:12:28.958 ], 00:12:28.958 "core_count": 1 00:12:28.958 } 00:12:28.958 [2024-11-27 08:44:25.471941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67432 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' -z 67432 ']' 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 67432 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # uname 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 67432 00:12:28.958 killing process with pid 67432 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:28.958 08:44:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 67432' 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 67432 00:12:28.958 08:44:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@975 -- # wait 67432 00:12:28.958 [2024-11-27 08:44:25.512170] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.216 [2024-11-27 08:44:25.747246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y0jtNLoAjw 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:30.591 ************************************ 00:12:30.591 END TEST raid_write_error_test 00:12:30.591 ************************************ 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:30.591 00:12:30.591 real 0m4.937s 00:12:30.591 user 0m6.040s 00:12:30.591 sys 0m0.662s 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:30.591 08:44:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.591 08:44:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:30.591 08:44:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:30.591 08:44:27 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:12:30.591 08:44:27 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:30.591 08:44:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.591 ************************************ 00:12:30.591 START TEST raid_state_function_test 00:12:30.591 ************************************ 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 3 false 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:30.591 Process raid pid: 67581 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67581 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67581' 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67581 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 67581 ']' 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:30.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:30.591 08:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.591 [2024-11-27 08:44:27.134036] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:12:30.591 [2024-11-27 08:44:27.134623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.591 [2024-11-27 08:44:27.321188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.851 [2024-11-27 08:44:27.476493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.109 [2024-11-27 08:44:27.705296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.109 [2024-11-27 08:44:27.705380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.676 [2024-11-27 08:44:28.135134] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.676 [2024-11-27 08:44:28.135215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.676 [2024-11-27 08:44:28.135250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.676 [2024-11-27 08:44:28.135268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.676 [2024-11-27 08:44:28.135279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.676 [2024-11-27 08:44:28.135294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.676 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.677 
08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.677 "name": "Existed_Raid", 00:12:31.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.677 "strip_size_kb": 0, 00:12:31.677 "state": "configuring", 00:12:31.677 "raid_level": "raid1", 00:12:31.677 "superblock": false, 00:12:31.677 "num_base_bdevs": 3, 00:12:31.677 "num_base_bdevs_discovered": 0, 00:12:31.677 "num_base_bdevs_operational": 3, 00:12:31.677 "base_bdevs_list": [ 00:12:31.677 { 00:12:31.677 "name": "BaseBdev1", 00:12:31.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.677 "is_configured": false, 00:12:31.677 "data_offset": 0, 00:12:31.677 "data_size": 0 00:12:31.677 }, 00:12:31.677 { 00:12:31.677 "name": "BaseBdev2", 00:12:31.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.677 "is_configured": false, 00:12:31.677 "data_offset": 0, 00:12:31.677 "data_size": 0 00:12:31.677 }, 00:12:31.677 { 00:12:31.677 "name": "BaseBdev3", 00:12:31.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.677 "is_configured": false, 00:12:31.677 "data_offset": 0, 00:12:31.677 "data_size": 0 00:12:31.677 } 00:12:31.677 ] 00:12:31.677 }' 00:12:31.677 08:44:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.677 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.936 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.936 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.936 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.936 [2024-11-27 08:44:28.679287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.936 [2024-11-27 08:44:28.679400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:31.936 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.936 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:31.936 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.936 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.936 [2024-11-27 08:44:28.687215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.936 [2024-11-27 08:44:28.687289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.936 [2024-11-27 08:44:28.687305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.936 [2024-11-27 08:44:28.687322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.936 [2024-11-27 08:44:28.687332] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.936 [2024-11-27 08:44:28.687385] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.936 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.937 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:31.937 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.937 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.195 [2024-11-27 08:44:28.737266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.195 BaseBdev1 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.195 [ 00:12:32.195 { 00:12:32.195 "name": "BaseBdev1", 00:12:32.195 "aliases": [ 00:12:32.195 "738e230f-99f1-41c8-8e15-8194e6cb03ff" 00:12:32.195 ], 00:12:32.195 "product_name": "Malloc disk", 00:12:32.195 "block_size": 512, 00:12:32.195 "num_blocks": 65536, 00:12:32.195 "uuid": "738e230f-99f1-41c8-8e15-8194e6cb03ff", 00:12:32.195 "assigned_rate_limits": { 00:12:32.195 "rw_ios_per_sec": 0, 00:12:32.195 "rw_mbytes_per_sec": 0, 00:12:32.195 "r_mbytes_per_sec": 0, 00:12:32.195 "w_mbytes_per_sec": 0 00:12:32.195 }, 00:12:32.195 "claimed": true, 00:12:32.195 "claim_type": "exclusive_write", 00:12:32.195 "zoned": false, 00:12:32.195 "supported_io_types": { 00:12:32.195 "read": true, 00:12:32.195 "write": true, 00:12:32.195 "unmap": true, 00:12:32.195 "flush": true, 00:12:32.195 "reset": true, 00:12:32.195 "nvme_admin": false, 00:12:32.195 "nvme_io": false, 00:12:32.195 "nvme_io_md": false, 00:12:32.195 "write_zeroes": true, 00:12:32.195 "zcopy": true, 00:12:32.195 "get_zone_info": false, 00:12:32.195 "zone_management": false, 00:12:32.195 "zone_append": false, 00:12:32.195 "compare": false, 00:12:32.195 "compare_and_write": false, 00:12:32.195 "abort": true, 00:12:32.195 "seek_hole": false, 00:12:32.195 "seek_data": false, 00:12:32.195 "copy": true, 00:12:32.195 "nvme_iov_md": false 00:12:32.195 }, 00:12:32.195 "memory_domains": [ 00:12:32.195 { 00:12:32.195 "dma_device_id": "system", 00:12:32.195 "dma_device_type": 1 00:12:32.195 }, 00:12:32.195 { 00:12:32.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.195 "dma_device_type": 2 00:12:32.195 } 00:12:32.195 ], 00:12:32.195 "driver_specific": {} 00:12:32.195 } 00:12:32.195 ] 00:12:32.195 08:44:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.195 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:32.196 "name": "Existed_Raid", 00:12:32.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.196 "strip_size_kb": 0, 00:12:32.196 "state": "configuring", 00:12:32.196 "raid_level": "raid1", 00:12:32.196 "superblock": false, 00:12:32.196 "num_base_bdevs": 3, 00:12:32.196 "num_base_bdevs_discovered": 1, 00:12:32.196 "num_base_bdevs_operational": 3, 00:12:32.196 "base_bdevs_list": [ 00:12:32.196 { 00:12:32.196 "name": "BaseBdev1", 00:12:32.196 "uuid": "738e230f-99f1-41c8-8e15-8194e6cb03ff", 00:12:32.196 "is_configured": true, 00:12:32.196 "data_offset": 0, 00:12:32.196 "data_size": 65536 00:12:32.196 }, 00:12:32.196 { 00:12:32.196 "name": "BaseBdev2", 00:12:32.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.196 "is_configured": false, 00:12:32.196 "data_offset": 0, 00:12:32.196 "data_size": 0 00:12:32.196 }, 00:12:32.196 { 00:12:32.196 "name": "BaseBdev3", 00:12:32.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.196 "is_configured": false, 00:12:32.196 "data_offset": 0, 00:12:32.196 "data_size": 0 00:12:32.196 } 00:12:32.196 ] 00:12:32.196 }' 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.196 08:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.761 [2024-11-27 08:44:29.261518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.761 [2024-11-27 08:44:29.261630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.761 [2024-11-27 08:44:29.273604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.761 [2024-11-27 08:44:29.276678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.761 [2024-11-27 08:44:29.276894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.761 [2024-11-27 08:44:29.277036] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.761 [2024-11-27 08:44:29.277103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.761 "name": "Existed_Raid", 00:12:32.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.761 "strip_size_kb": 0, 00:12:32.761 "state": "configuring", 00:12:32.761 "raid_level": "raid1", 00:12:32.761 "superblock": false, 00:12:32.761 "num_base_bdevs": 3, 00:12:32.761 "num_base_bdevs_discovered": 1, 00:12:32.761 "num_base_bdevs_operational": 3, 00:12:32.761 "base_bdevs_list": [ 00:12:32.761 { 00:12:32.761 "name": "BaseBdev1", 00:12:32.761 "uuid": "738e230f-99f1-41c8-8e15-8194e6cb03ff", 00:12:32.761 "is_configured": true, 00:12:32.761 "data_offset": 0, 00:12:32.761 "data_size": 65536 00:12:32.761 }, 00:12:32.761 { 00:12:32.761 "name": "BaseBdev2", 00:12:32.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.761 
"is_configured": false, 00:12:32.761 "data_offset": 0, 00:12:32.761 "data_size": 0 00:12:32.761 }, 00:12:32.761 { 00:12:32.761 "name": "BaseBdev3", 00:12:32.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.761 "is_configured": false, 00:12:32.761 "data_offset": 0, 00:12:32.761 "data_size": 0 00:12:32.761 } 00:12:32.761 ] 00:12:32.761 }' 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.761 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 [2024-11-27 08:44:29.838923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.329 BaseBdev2 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:33.329 08:44:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 [ 00:12:33.329 { 00:12:33.329 "name": "BaseBdev2", 00:12:33.329 "aliases": [ 00:12:33.329 "4ed77a9a-0ebf-4833-96f8-58a071ef091f" 00:12:33.329 ], 00:12:33.329 "product_name": "Malloc disk", 00:12:33.329 "block_size": 512, 00:12:33.329 "num_blocks": 65536, 00:12:33.329 "uuid": "4ed77a9a-0ebf-4833-96f8-58a071ef091f", 00:12:33.329 "assigned_rate_limits": { 00:12:33.329 "rw_ios_per_sec": 0, 00:12:33.329 "rw_mbytes_per_sec": 0, 00:12:33.329 "r_mbytes_per_sec": 0, 00:12:33.329 "w_mbytes_per_sec": 0 00:12:33.329 }, 00:12:33.329 "claimed": true, 00:12:33.329 "claim_type": "exclusive_write", 00:12:33.329 "zoned": false, 00:12:33.329 "supported_io_types": { 00:12:33.329 "read": true, 00:12:33.329 "write": true, 00:12:33.329 "unmap": true, 00:12:33.329 "flush": true, 00:12:33.329 "reset": true, 00:12:33.329 "nvme_admin": false, 00:12:33.329 "nvme_io": false, 00:12:33.329 "nvme_io_md": false, 00:12:33.329 "write_zeroes": true, 00:12:33.329 "zcopy": true, 00:12:33.329 "get_zone_info": false, 00:12:33.329 "zone_management": false, 00:12:33.329 "zone_append": false, 00:12:33.329 "compare": false, 00:12:33.329 "compare_and_write": false, 00:12:33.329 "abort": true, 00:12:33.329 "seek_hole": false, 00:12:33.329 "seek_data": false, 00:12:33.329 "copy": true, 00:12:33.329 "nvme_iov_md": false 00:12:33.329 }, 00:12:33.329 
"memory_domains": [ 00:12:33.329 { 00:12:33.329 "dma_device_id": "system", 00:12:33.329 "dma_device_type": 1 00:12:33.329 }, 00:12:33.329 { 00:12:33.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.329 "dma_device_type": 2 00:12:33.329 } 00:12:33.329 ], 00:12:33.329 "driver_specific": {} 00:12:33.329 } 00:12:33.329 ] 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.329 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.330 "name": "Existed_Raid", 00:12:33.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.330 "strip_size_kb": 0, 00:12:33.330 "state": "configuring", 00:12:33.330 "raid_level": "raid1", 00:12:33.330 "superblock": false, 00:12:33.330 "num_base_bdevs": 3, 00:12:33.330 "num_base_bdevs_discovered": 2, 00:12:33.330 "num_base_bdevs_operational": 3, 00:12:33.330 "base_bdevs_list": [ 00:12:33.330 { 00:12:33.330 "name": "BaseBdev1", 00:12:33.330 "uuid": "738e230f-99f1-41c8-8e15-8194e6cb03ff", 00:12:33.330 "is_configured": true, 00:12:33.330 "data_offset": 0, 00:12:33.330 "data_size": 65536 00:12:33.330 }, 00:12:33.330 { 00:12:33.330 "name": "BaseBdev2", 00:12:33.330 "uuid": "4ed77a9a-0ebf-4833-96f8-58a071ef091f", 00:12:33.330 "is_configured": true, 00:12:33.330 "data_offset": 0, 00:12:33.330 "data_size": 65536 00:12:33.330 }, 00:12:33.330 { 00:12:33.330 "name": "BaseBdev3", 00:12:33.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.330 "is_configured": false, 00:12:33.330 "data_offset": 0, 00:12:33.330 "data_size": 0 00:12:33.330 } 00:12:33.330 ] 00:12:33.330 }' 00:12:33.330 08:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.330 08:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 [2024-11-27 08:44:30.455774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.898 [2024-11-27 08:44:30.455884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.898 [2024-11-27 08:44:30.455909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:33.898 [2024-11-27 08:44:30.456346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:33.898 [2024-11-27 08:44:30.456659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:33.898 [2024-11-27 08:44:30.456679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:33.898 [2024-11-27 08:44:30.457116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.898 BaseBdev3 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.898 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 [ 00:12:33.898 { 00:12:33.898 "name": "BaseBdev3", 00:12:33.898 "aliases": [ 00:12:33.898 "9f40bdcc-b3d6-4c35-a0a1-ffedefb65046" 00:12:33.898 ], 00:12:33.898 "product_name": "Malloc disk", 00:12:33.898 "block_size": 512, 00:12:33.898 "num_blocks": 65536, 00:12:33.898 "uuid": "9f40bdcc-b3d6-4c35-a0a1-ffedefb65046", 00:12:33.898 "assigned_rate_limits": { 00:12:33.898 "rw_ios_per_sec": 0, 00:12:33.898 "rw_mbytes_per_sec": 0, 00:12:33.898 "r_mbytes_per_sec": 0, 00:12:33.898 "w_mbytes_per_sec": 0 00:12:33.898 }, 00:12:33.898 "claimed": true, 00:12:33.898 "claim_type": "exclusive_write", 00:12:33.898 "zoned": false, 00:12:33.898 "supported_io_types": { 00:12:33.898 "read": true, 00:12:33.898 "write": true, 00:12:33.898 "unmap": true, 00:12:33.898 "flush": true, 00:12:33.898 "reset": true, 00:12:33.898 "nvme_admin": false, 00:12:33.898 "nvme_io": false, 00:12:33.898 "nvme_io_md": false, 00:12:33.898 "write_zeroes": true, 00:12:33.898 "zcopy": true, 00:12:33.898 "get_zone_info": false, 00:12:33.898 "zone_management": false, 00:12:33.898 "zone_append": false, 00:12:33.898 "compare": false, 00:12:33.898 "compare_and_write": false, 00:12:33.898 "abort": true, 00:12:33.898 "seek_hole": false, 00:12:33.898 "seek_data": false, 00:12:33.898 
"copy": true, 00:12:33.898 "nvme_iov_md": false 00:12:33.898 }, 00:12:33.898 "memory_domains": [ 00:12:33.899 { 00:12:33.899 "dma_device_id": "system", 00:12:33.899 "dma_device_type": 1 00:12:33.899 }, 00:12:33.899 { 00:12:33.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.899 "dma_device_type": 2 00:12:33.899 } 00:12:33.899 ], 00:12:33.899 "driver_specific": {} 00:12:33.899 } 00:12:33.899 ] 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.899 08:44:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.899 "name": "Existed_Raid", 00:12:33.899 "uuid": "b1bbd6e7-ada4-4130-be0a-4474f28c05a2", 00:12:33.899 "strip_size_kb": 0, 00:12:33.899 "state": "online", 00:12:33.899 "raid_level": "raid1", 00:12:33.899 "superblock": false, 00:12:33.899 "num_base_bdevs": 3, 00:12:33.899 "num_base_bdevs_discovered": 3, 00:12:33.899 "num_base_bdevs_operational": 3, 00:12:33.899 "base_bdevs_list": [ 00:12:33.899 { 00:12:33.899 "name": "BaseBdev1", 00:12:33.899 "uuid": "738e230f-99f1-41c8-8e15-8194e6cb03ff", 00:12:33.899 "is_configured": true, 00:12:33.899 "data_offset": 0, 00:12:33.899 "data_size": 65536 00:12:33.899 }, 00:12:33.899 { 00:12:33.899 "name": "BaseBdev2", 00:12:33.899 "uuid": "4ed77a9a-0ebf-4833-96f8-58a071ef091f", 00:12:33.899 "is_configured": true, 00:12:33.899 "data_offset": 0, 00:12:33.899 "data_size": 65536 00:12:33.899 }, 00:12:33.899 { 00:12:33.899 "name": "BaseBdev3", 00:12:33.899 "uuid": "9f40bdcc-b3d6-4c35-a0a1-ffedefb65046", 00:12:33.899 "is_configured": true, 00:12:33.899 "data_offset": 0, 00:12:33.899 "data_size": 65536 00:12:33.899 } 00:12:33.899 ] 00:12:33.899 }' 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.899 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.468 08:44:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.468 08:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.468 [2024-11-27 08:44:30.984422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.468 "name": "Existed_Raid", 00:12:34.468 "aliases": [ 00:12:34.468 "b1bbd6e7-ada4-4130-be0a-4474f28c05a2" 00:12:34.468 ], 00:12:34.468 "product_name": "Raid Volume", 00:12:34.468 "block_size": 512, 00:12:34.468 "num_blocks": 65536, 00:12:34.468 "uuid": "b1bbd6e7-ada4-4130-be0a-4474f28c05a2", 00:12:34.468 "assigned_rate_limits": { 00:12:34.468 "rw_ios_per_sec": 0, 00:12:34.468 "rw_mbytes_per_sec": 0, 00:12:34.468 "r_mbytes_per_sec": 0, 00:12:34.468 "w_mbytes_per_sec": 0 00:12:34.468 }, 00:12:34.468 "claimed": false, 00:12:34.468 "zoned": false, 
00:12:34.468 "supported_io_types": { 00:12:34.468 "read": true, 00:12:34.468 "write": true, 00:12:34.468 "unmap": false, 00:12:34.468 "flush": false, 00:12:34.468 "reset": true, 00:12:34.468 "nvme_admin": false, 00:12:34.468 "nvme_io": false, 00:12:34.468 "nvme_io_md": false, 00:12:34.468 "write_zeroes": true, 00:12:34.468 "zcopy": false, 00:12:34.468 "get_zone_info": false, 00:12:34.468 "zone_management": false, 00:12:34.468 "zone_append": false, 00:12:34.468 "compare": false, 00:12:34.468 "compare_and_write": false, 00:12:34.468 "abort": false, 00:12:34.468 "seek_hole": false, 00:12:34.468 "seek_data": false, 00:12:34.468 "copy": false, 00:12:34.468 "nvme_iov_md": false 00:12:34.468 }, 00:12:34.468 "memory_domains": [ 00:12:34.468 { 00:12:34.468 "dma_device_id": "system", 00:12:34.468 "dma_device_type": 1 00:12:34.468 }, 00:12:34.468 { 00:12:34.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.468 "dma_device_type": 2 00:12:34.468 }, 00:12:34.468 { 00:12:34.468 "dma_device_id": "system", 00:12:34.468 "dma_device_type": 1 00:12:34.468 }, 00:12:34.468 { 00:12:34.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.468 "dma_device_type": 2 00:12:34.468 }, 00:12:34.468 { 00:12:34.468 "dma_device_id": "system", 00:12:34.468 "dma_device_type": 1 00:12:34.468 }, 00:12:34.468 { 00:12:34.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.468 "dma_device_type": 2 00:12:34.468 } 00:12:34.468 ], 00:12:34.468 "driver_specific": { 00:12:34.468 "raid": { 00:12:34.468 "uuid": "b1bbd6e7-ada4-4130-be0a-4474f28c05a2", 00:12:34.468 "strip_size_kb": 0, 00:12:34.468 "state": "online", 00:12:34.468 "raid_level": "raid1", 00:12:34.468 "superblock": false, 00:12:34.468 "num_base_bdevs": 3, 00:12:34.468 "num_base_bdevs_discovered": 3, 00:12:34.468 "num_base_bdevs_operational": 3, 00:12:34.468 "base_bdevs_list": [ 00:12:34.468 { 00:12:34.468 "name": "BaseBdev1", 00:12:34.468 "uuid": "738e230f-99f1-41c8-8e15-8194e6cb03ff", 00:12:34.468 "is_configured": true, 00:12:34.468 
"data_offset": 0, 00:12:34.468 "data_size": 65536 00:12:34.468 }, 00:12:34.468 { 00:12:34.468 "name": "BaseBdev2", 00:12:34.468 "uuid": "4ed77a9a-0ebf-4833-96f8-58a071ef091f", 00:12:34.468 "is_configured": true, 00:12:34.468 "data_offset": 0, 00:12:34.468 "data_size": 65536 00:12:34.468 }, 00:12:34.468 { 00:12:34.468 "name": "BaseBdev3", 00:12:34.468 "uuid": "9f40bdcc-b3d6-4c35-a0a1-ffedefb65046", 00:12:34.468 "is_configured": true, 00:12:34.468 "data_offset": 0, 00:12:34.468 "data_size": 65536 00:12:34.468 } 00:12:34.468 ] 00:12:34.468 } 00:12:34.468 } 00:12:34.468 }' 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:34.468 BaseBdev2 00:12:34.468 BaseBdev3' 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.468 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.469 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:34.469 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.469 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.469 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.469 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:34.469 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.469 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.469 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.728 [2024-11-27 08:44:31.312168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.728 "name": "Existed_Raid", 00:12:34.728 "uuid": "b1bbd6e7-ada4-4130-be0a-4474f28c05a2", 00:12:34.728 "strip_size_kb": 0, 00:12:34.728 "state": "online", 00:12:34.728 "raid_level": "raid1", 00:12:34.728 "superblock": false, 00:12:34.728 "num_base_bdevs": 3, 00:12:34.728 "num_base_bdevs_discovered": 2, 00:12:34.728 "num_base_bdevs_operational": 2, 00:12:34.728 "base_bdevs_list": [ 00:12:34.728 { 00:12:34.728 "name": null, 00:12:34.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.728 "is_configured": false, 00:12:34.728 "data_offset": 0, 00:12:34.728 "data_size": 65536 00:12:34.728 }, 00:12:34.728 { 00:12:34.728 "name": "BaseBdev2", 00:12:34.728 "uuid": "4ed77a9a-0ebf-4833-96f8-58a071ef091f", 00:12:34.728 "is_configured": true, 00:12:34.728 "data_offset": 0, 00:12:34.728 "data_size": 65536 00:12:34.728 }, 00:12:34.728 { 00:12:34.728 "name": "BaseBdev3", 00:12:34.728 "uuid": "9f40bdcc-b3d6-4c35-a0a1-ffedefb65046", 00:12:34.728 "is_configured": true, 00:12:34.728 "data_offset": 0, 00:12:34.728 "data_size": 65536 00:12:34.728 } 00:12:34.728 ] 
00:12:34.728 }' 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.728 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.297 08:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.297 [2024-11-27 08:44:31.979093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.556 08:44:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.556 [2024-11-27 08:44:32.137797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.556 [2024-11-27 08:44:32.138209] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.556 [2024-11-27 08:44:32.233671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.556 [2024-11-27 08:44:32.233756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.556 [2024-11-27 08:44:32.233779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:35.556 08:44:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.556 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.816 BaseBdev2 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:35.816 
08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.816 [ 00:12:35.816 { 00:12:35.816 "name": "BaseBdev2", 00:12:35.816 "aliases": [ 00:12:35.816 "38cb439a-3cd9-4c3c-92c4-cc676901a5c9" 00:12:35.816 ], 00:12:35.816 "product_name": "Malloc disk", 00:12:35.816 "block_size": 512, 00:12:35.816 "num_blocks": 65536, 00:12:35.816 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:35.816 "assigned_rate_limits": { 00:12:35.816 "rw_ios_per_sec": 0, 00:12:35.816 "rw_mbytes_per_sec": 0, 00:12:35.816 "r_mbytes_per_sec": 0, 00:12:35.816 "w_mbytes_per_sec": 0 00:12:35.816 }, 00:12:35.816 "claimed": false, 00:12:35.816 "zoned": false, 00:12:35.816 "supported_io_types": { 00:12:35.816 "read": true, 00:12:35.816 "write": true, 00:12:35.816 "unmap": true, 00:12:35.816 "flush": true, 00:12:35.816 "reset": true, 00:12:35.816 "nvme_admin": false, 00:12:35.816 "nvme_io": false, 00:12:35.816 "nvme_io_md": false, 00:12:35.816 "write_zeroes": true, 
00:12:35.816 "zcopy": true, 00:12:35.816 "get_zone_info": false, 00:12:35.816 "zone_management": false, 00:12:35.816 "zone_append": false, 00:12:35.816 "compare": false, 00:12:35.816 "compare_and_write": false, 00:12:35.816 "abort": true, 00:12:35.816 "seek_hole": false, 00:12:35.816 "seek_data": false, 00:12:35.816 "copy": true, 00:12:35.816 "nvme_iov_md": false 00:12:35.816 }, 00:12:35.816 "memory_domains": [ 00:12:35.816 { 00:12:35.816 "dma_device_id": "system", 00:12:35.816 "dma_device_type": 1 00:12:35.816 }, 00:12:35.816 { 00:12:35.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.816 "dma_device_type": 2 00:12:35.816 } 00:12:35.816 ], 00:12:35.816 "driver_specific": {} 00:12:35.816 } 00:12:35.816 ] 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.816 BaseBdev3 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:35.816 08:44:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.816 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.816 [ 00:12:35.816 { 00:12:35.816 "name": "BaseBdev3", 00:12:35.816 "aliases": [ 00:12:35.816 "4d49843e-e722-4f75-a711-30f2a0145778" 00:12:35.816 ], 00:12:35.816 "product_name": "Malloc disk", 00:12:35.816 "block_size": 512, 00:12:35.816 "num_blocks": 65536, 00:12:35.817 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:35.817 "assigned_rate_limits": { 00:12:35.817 "rw_ios_per_sec": 0, 00:12:35.817 "rw_mbytes_per_sec": 0, 00:12:35.817 "r_mbytes_per_sec": 0, 00:12:35.817 "w_mbytes_per_sec": 0 00:12:35.817 }, 00:12:35.817 "claimed": false, 00:12:35.817 "zoned": false, 00:12:35.817 "supported_io_types": { 00:12:35.817 "read": true, 00:12:35.817 "write": true, 00:12:35.817 "unmap": true, 00:12:35.817 "flush": true, 00:12:35.817 "reset": true, 00:12:35.817 "nvme_admin": false, 00:12:35.817 "nvme_io": false, 00:12:35.817 "nvme_io_md": false, 00:12:35.817 "write_zeroes": true, 
00:12:35.817 "zcopy": true, 00:12:35.817 "get_zone_info": false, 00:12:35.817 "zone_management": false, 00:12:35.817 "zone_append": false, 00:12:35.817 "compare": false, 00:12:35.817 "compare_and_write": false, 00:12:35.817 "abort": true, 00:12:35.817 "seek_hole": false, 00:12:35.817 "seek_data": false, 00:12:35.817 "copy": true, 00:12:35.817 "nvme_iov_md": false 00:12:35.817 }, 00:12:35.817 "memory_domains": [ 00:12:35.817 { 00:12:35.817 "dma_device_id": "system", 00:12:35.817 "dma_device_type": 1 00:12:35.817 }, 00:12:35.817 { 00:12:35.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.817 "dma_device_type": 2 00:12:35.817 } 00:12:35.817 ], 00:12:35.817 "driver_specific": {} 00:12:35.817 } 00:12:35.817 ] 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 [2024-11-27 08:44:32.459364] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.817 [2024-11-27 08:44:32.459468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.817 [2024-11-27 08:44:32.459501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.817 [2024-11-27 08:44:32.462224] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:35.817 "name": "Existed_Raid", 00:12:35.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.817 "strip_size_kb": 0, 00:12:35.817 "state": "configuring", 00:12:35.817 "raid_level": "raid1", 00:12:35.817 "superblock": false, 00:12:35.817 "num_base_bdevs": 3, 00:12:35.817 "num_base_bdevs_discovered": 2, 00:12:35.817 "num_base_bdevs_operational": 3, 00:12:35.817 "base_bdevs_list": [ 00:12:35.817 { 00:12:35.817 "name": "BaseBdev1", 00:12:35.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.817 "is_configured": false, 00:12:35.817 "data_offset": 0, 00:12:35.817 "data_size": 0 00:12:35.817 }, 00:12:35.817 { 00:12:35.817 "name": "BaseBdev2", 00:12:35.817 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:35.817 "is_configured": true, 00:12:35.817 "data_offset": 0, 00:12:35.817 "data_size": 65536 00:12:35.817 }, 00:12:35.817 { 00:12:35.817 "name": "BaseBdev3", 00:12:35.817 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:35.817 "is_configured": true, 00:12:35.817 "data_offset": 0, 00:12:35.817 "data_size": 65536 00:12:35.817 } 00:12:35.817 ] 00:12:35.817 }' 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.817 08:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.386 [2024-11-27 08:44:33.047590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.386 "name": "Existed_Raid", 00:12:36.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.386 "strip_size_kb": 0, 00:12:36.386 "state": "configuring", 00:12:36.386 "raid_level": "raid1", 00:12:36.386 "superblock": false, 00:12:36.386 "num_base_bdevs": 3, 
00:12:36.386 "num_base_bdevs_discovered": 1, 00:12:36.386 "num_base_bdevs_operational": 3, 00:12:36.386 "base_bdevs_list": [ 00:12:36.386 { 00:12:36.386 "name": "BaseBdev1", 00:12:36.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.386 "is_configured": false, 00:12:36.386 "data_offset": 0, 00:12:36.386 "data_size": 0 00:12:36.386 }, 00:12:36.386 { 00:12:36.386 "name": null, 00:12:36.386 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:36.386 "is_configured": false, 00:12:36.386 "data_offset": 0, 00:12:36.386 "data_size": 65536 00:12:36.386 }, 00:12:36.386 { 00:12:36.386 "name": "BaseBdev3", 00:12:36.386 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:36.386 "is_configured": true, 00:12:36.386 "data_offset": 0, 00:12:36.386 "data_size": 65536 00:12:36.386 } 00:12:36.386 ] 00:12:36.386 }' 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.386 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.999 08:44:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.999 [2024-11-27 08:44:33.669767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.999 BaseBdev1 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.999 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.000 [ 00:12:37.000 { 00:12:37.000 "name": "BaseBdev1", 00:12:37.000 "aliases": [ 00:12:37.000 "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1" 00:12:37.000 ], 00:12:37.000 "product_name": "Malloc disk", 
00:12:37.000 "block_size": 512, 00:12:37.000 "num_blocks": 65536, 00:12:37.000 "uuid": "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:37.000 "assigned_rate_limits": { 00:12:37.000 "rw_ios_per_sec": 0, 00:12:37.000 "rw_mbytes_per_sec": 0, 00:12:37.000 "r_mbytes_per_sec": 0, 00:12:37.000 "w_mbytes_per_sec": 0 00:12:37.000 }, 00:12:37.000 "claimed": true, 00:12:37.000 "claim_type": "exclusive_write", 00:12:37.000 "zoned": false, 00:12:37.000 "supported_io_types": { 00:12:37.000 "read": true, 00:12:37.000 "write": true, 00:12:37.000 "unmap": true, 00:12:37.000 "flush": true, 00:12:37.000 "reset": true, 00:12:37.000 "nvme_admin": false, 00:12:37.000 "nvme_io": false, 00:12:37.000 "nvme_io_md": false, 00:12:37.000 "write_zeroes": true, 00:12:37.000 "zcopy": true, 00:12:37.000 "get_zone_info": false, 00:12:37.000 "zone_management": false, 00:12:37.000 "zone_append": false, 00:12:37.000 "compare": false, 00:12:37.000 "compare_and_write": false, 00:12:37.000 "abort": true, 00:12:37.000 "seek_hole": false, 00:12:37.000 "seek_data": false, 00:12:37.000 "copy": true, 00:12:37.000 "nvme_iov_md": false 00:12:37.000 }, 00:12:37.000 "memory_domains": [ 00:12:37.000 { 00:12:37.000 "dma_device_id": "system", 00:12:37.000 "dma_device_type": 1 00:12:37.000 }, 00:12:37.000 { 00:12:37.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.000 "dma_device_type": 2 00:12:37.000 } 00:12:37.000 ], 00:12:37.000 "driver_specific": {} 00:12:37.000 } 00:12:37.000 ] 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.000 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.260 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.260 "name": "Existed_Raid", 00:12:37.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.260 "strip_size_kb": 0, 00:12:37.260 "state": "configuring", 00:12:37.260 "raid_level": "raid1", 00:12:37.260 "superblock": false, 00:12:37.260 "num_base_bdevs": 3, 00:12:37.260 "num_base_bdevs_discovered": 2, 00:12:37.260 "num_base_bdevs_operational": 3, 00:12:37.260 "base_bdevs_list": [ 00:12:37.260 { 00:12:37.260 "name": "BaseBdev1", 00:12:37.260 "uuid": 
"59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:37.260 "is_configured": true, 00:12:37.260 "data_offset": 0, 00:12:37.260 "data_size": 65536 00:12:37.260 }, 00:12:37.260 { 00:12:37.260 "name": null, 00:12:37.260 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:37.260 "is_configured": false, 00:12:37.260 "data_offset": 0, 00:12:37.260 "data_size": 65536 00:12:37.260 }, 00:12:37.260 { 00:12:37.260 "name": "BaseBdev3", 00:12:37.260 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:37.260 "is_configured": true, 00:12:37.260 "data_offset": 0, 00:12:37.260 "data_size": 65536 00:12:37.260 } 00:12:37.260 ] 00:12:37.260 }' 00:12:37.260 08:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.260 08:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.519 [2024-11-27 08:44:34.265993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.519 08:44:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.519 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.779 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.779 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.779 "name": "Existed_Raid", 00:12:37.779 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:37.779 "strip_size_kb": 0, 00:12:37.779 "state": "configuring", 00:12:37.779 "raid_level": "raid1", 00:12:37.779 "superblock": false, 00:12:37.779 "num_base_bdevs": 3, 00:12:37.779 "num_base_bdevs_discovered": 1, 00:12:37.779 "num_base_bdevs_operational": 3, 00:12:37.779 "base_bdevs_list": [ 00:12:37.779 { 00:12:37.779 "name": "BaseBdev1", 00:12:37.779 "uuid": "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:37.779 "is_configured": true, 00:12:37.779 "data_offset": 0, 00:12:37.779 "data_size": 65536 00:12:37.779 }, 00:12:37.779 { 00:12:37.779 "name": null, 00:12:37.779 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:37.779 "is_configured": false, 00:12:37.779 "data_offset": 0, 00:12:37.779 "data_size": 65536 00:12:37.779 }, 00:12:37.779 { 00:12:37.779 "name": null, 00:12:37.779 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:37.779 "is_configured": false, 00:12:37.779 "data_offset": 0, 00:12:37.779 "data_size": 65536 00:12:37.779 } 00:12:37.779 ] 00:12:37.779 }' 00:12:37.779 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.779 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.349 [2024-11-27 08:44:34.870260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.349 "name": "Existed_Raid", 00:12:38.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.349 "strip_size_kb": 0, 00:12:38.349 "state": "configuring", 00:12:38.349 "raid_level": "raid1", 00:12:38.349 "superblock": false, 00:12:38.349 "num_base_bdevs": 3, 00:12:38.349 "num_base_bdevs_discovered": 2, 00:12:38.349 "num_base_bdevs_operational": 3, 00:12:38.349 "base_bdevs_list": [ 00:12:38.349 { 00:12:38.349 "name": "BaseBdev1", 00:12:38.349 "uuid": "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:38.349 "is_configured": true, 00:12:38.349 "data_offset": 0, 00:12:38.349 "data_size": 65536 00:12:38.349 }, 00:12:38.349 { 00:12:38.349 "name": null, 00:12:38.349 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:38.349 "is_configured": false, 00:12:38.349 "data_offset": 0, 00:12:38.349 "data_size": 65536 00:12:38.349 }, 00:12:38.349 { 00:12:38.349 "name": "BaseBdev3", 00:12:38.349 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:38.349 "is_configured": true, 00:12:38.349 "data_offset": 0, 00:12:38.349 "data_size": 65536 00:12:38.349 } 00:12:38.349 ] 00:12:38.349 }' 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.349 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.918 08:44:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.918 [2024-11-27 08:44:35.458463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.918 "name": "Existed_Raid", 00:12:38.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.918 "strip_size_kb": 0, 00:12:38.918 "state": "configuring", 00:12:38.918 "raid_level": "raid1", 00:12:38.918 "superblock": false, 00:12:38.918 "num_base_bdevs": 3, 00:12:38.918 "num_base_bdevs_discovered": 1, 00:12:38.918 "num_base_bdevs_operational": 3, 00:12:38.918 "base_bdevs_list": [ 00:12:38.918 { 00:12:38.918 "name": null, 00:12:38.918 "uuid": "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:38.918 "is_configured": false, 00:12:38.918 "data_offset": 0, 00:12:38.918 "data_size": 65536 00:12:38.918 }, 00:12:38.918 { 00:12:38.918 "name": null, 00:12:38.918 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:38.918 "is_configured": false, 00:12:38.918 "data_offset": 0, 00:12:38.918 "data_size": 65536 00:12:38.918 }, 00:12:38.918 { 00:12:38.918 "name": "BaseBdev3", 00:12:38.918 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:38.918 "is_configured": true, 00:12:38.918 "data_offset": 0, 00:12:38.918 "data_size": 65536 00:12:38.918 } 00:12:38.918 ] 00:12:38.918 }' 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.918 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.486 [2024-11-27 08:44:36.128533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.486 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.486 "name": "Existed_Raid", 00:12:39.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.486 "strip_size_kb": 0, 00:12:39.486 "state": "configuring", 00:12:39.486 "raid_level": "raid1", 00:12:39.486 "superblock": false, 00:12:39.486 "num_base_bdevs": 3, 00:12:39.486 "num_base_bdevs_discovered": 2, 00:12:39.486 "num_base_bdevs_operational": 3, 00:12:39.486 "base_bdevs_list": [ 00:12:39.486 { 00:12:39.486 "name": null, 00:12:39.486 "uuid": "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:39.486 "is_configured": false, 00:12:39.486 "data_offset": 0, 00:12:39.486 "data_size": 65536 00:12:39.486 }, 00:12:39.486 { 00:12:39.486 "name": "BaseBdev2", 00:12:39.486 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:39.486 "is_configured": true, 00:12:39.486 "data_offset": 0, 00:12:39.486 "data_size": 65536 00:12:39.486 }, 00:12:39.487 { 00:12:39.487 "name": "BaseBdev3", 
00:12:39.487 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:39.487 "is_configured": true, 00:12:39.487 "data_offset": 0, 00:12:39.487 "data_size": 65536 00:12:39.487 } 00:12:39.487 ] 00:12:39.487 }' 00:12:39.487 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.487 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.054 [2024-11-27 08:44:36.754233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:40.054 [2024-11-27 08:44:36.754318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:40.054 [2024-11-27 08:44:36.754356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:40.054 [2024-11-27 08:44:36.754703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:40.054 [2024-11-27 08:44:36.754930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:40.054 [2024-11-27 08:44:36.754954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:40.054 [2024-11-27 08:44:36.755290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.054 NewBaseBdev 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.054 
08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.054 [ 00:12:40.054 { 00:12:40.054 "name": "NewBaseBdev", 00:12:40.054 "aliases": [ 00:12:40.054 "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1" 00:12:40.054 ], 00:12:40.054 "product_name": "Malloc disk", 00:12:40.054 "block_size": 512, 00:12:40.054 "num_blocks": 65536, 00:12:40.054 "uuid": "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:40.054 "assigned_rate_limits": { 00:12:40.054 "rw_ios_per_sec": 0, 00:12:40.054 "rw_mbytes_per_sec": 0, 00:12:40.054 "r_mbytes_per_sec": 0, 00:12:40.054 "w_mbytes_per_sec": 0 00:12:40.054 }, 00:12:40.054 "claimed": true, 00:12:40.054 "claim_type": "exclusive_write", 00:12:40.054 "zoned": false, 00:12:40.054 "supported_io_types": { 00:12:40.054 "read": true, 00:12:40.054 "write": true, 00:12:40.054 "unmap": true, 00:12:40.054 "flush": true, 00:12:40.054 "reset": true, 00:12:40.054 "nvme_admin": false, 00:12:40.054 "nvme_io": false, 00:12:40.054 "nvme_io_md": false, 00:12:40.054 "write_zeroes": true, 00:12:40.054 "zcopy": true, 00:12:40.054 "get_zone_info": false, 00:12:40.054 "zone_management": false, 00:12:40.054 "zone_append": false, 00:12:40.054 "compare": false, 00:12:40.054 "compare_and_write": false, 00:12:40.054 "abort": true, 00:12:40.054 "seek_hole": false, 00:12:40.054 "seek_data": false, 00:12:40.054 "copy": true, 00:12:40.054 "nvme_iov_md": false 00:12:40.054 }, 00:12:40.054 "memory_domains": [ 00:12:40.054 { 00:12:40.054 "dma_device_id": "system", 00:12:40.054 "dma_device_type": 1 
00:12:40.054 }, 00:12:40.054 { 00:12:40.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.054 "dma_device_type": 2 00:12:40.054 } 00:12:40.054 ], 00:12:40.054 "driver_specific": {} 00:12:40.054 } 00:12:40.054 ] 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.054 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.312 "name": "Existed_Raid", 00:12:40.312 "uuid": "c93edac5-7b92-4a10-93bd-7ab6046eb8d1", 00:12:40.312 "strip_size_kb": 0, 00:12:40.312 "state": "online", 00:12:40.312 "raid_level": "raid1", 00:12:40.312 "superblock": false, 00:12:40.312 "num_base_bdevs": 3, 00:12:40.312 "num_base_bdevs_discovered": 3, 00:12:40.312 "num_base_bdevs_operational": 3, 00:12:40.312 "base_bdevs_list": [ 00:12:40.312 { 00:12:40.312 "name": "NewBaseBdev", 00:12:40.312 "uuid": "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:40.312 "is_configured": true, 00:12:40.312 "data_offset": 0, 00:12:40.312 "data_size": 65536 00:12:40.312 }, 00:12:40.312 { 00:12:40.312 "name": "BaseBdev2", 00:12:40.312 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:40.312 "is_configured": true, 00:12:40.312 "data_offset": 0, 00:12:40.312 "data_size": 65536 00:12:40.312 }, 00:12:40.312 { 00:12:40.312 "name": "BaseBdev3", 00:12:40.312 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:40.312 "is_configured": true, 00:12:40.312 "data_offset": 0, 00:12:40.312 "data_size": 65536 00:12:40.312 } 00:12:40.312 ] 00:12:40.312 }' 00:12:40.312 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.313 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.571 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.571 [2024-11-27 08:44:37.310871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:40.831 "name": "Existed_Raid", 00:12:40.831 "aliases": [ 00:12:40.831 "c93edac5-7b92-4a10-93bd-7ab6046eb8d1" 00:12:40.831 ], 00:12:40.831 "product_name": "Raid Volume", 00:12:40.831 "block_size": 512, 00:12:40.831 "num_blocks": 65536, 00:12:40.831 "uuid": "c93edac5-7b92-4a10-93bd-7ab6046eb8d1", 00:12:40.831 "assigned_rate_limits": { 00:12:40.831 "rw_ios_per_sec": 0, 00:12:40.831 "rw_mbytes_per_sec": 0, 00:12:40.831 "r_mbytes_per_sec": 0, 00:12:40.831 "w_mbytes_per_sec": 0 00:12:40.831 }, 00:12:40.831 "claimed": false, 00:12:40.831 "zoned": false, 00:12:40.831 "supported_io_types": { 00:12:40.831 "read": true, 00:12:40.831 "write": true, 00:12:40.831 "unmap": false, 00:12:40.831 "flush": false, 00:12:40.831 "reset": true, 00:12:40.831 "nvme_admin": false, 00:12:40.831 "nvme_io": false, 00:12:40.831 "nvme_io_md": false, 00:12:40.831 "write_zeroes": true, 00:12:40.831 "zcopy": false, 00:12:40.831 "get_zone_info": false, 00:12:40.831 "zone_management": false, 00:12:40.831 
"zone_append": false, 00:12:40.831 "compare": false, 00:12:40.831 "compare_and_write": false, 00:12:40.831 "abort": false, 00:12:40.831 "seek_hole": false, 00:12:40.831 "seek_data": false, 00:12:40.831 "copy": false, 00:12:40.831 "nvme_iov_md": false 00:12:40.831 }, 00:12:40.831 "memory_domains": [ 00:12:40.831 { 00:12:40.831 "dma_device_id": "system", 00:12:40.831 "dma_device_type": 1 00:12:40.831 }, 00:12:40.831 { 00:12:40.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.831 "dma_device_type": 2 00:12:40.831 }, 00:12:40.831 { 00:12:40.831 "dma_device_id": "system", 00:12:40.831 "dma_device_type": 1 00:12:40.831 }, 00:12:40.831 { 00:12:40.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.831 "dma_device_type": 2 00:12:40.831 }, 00:12:40.831 { 00:12:40.831 "dma_device_id": "system", 00:12:40.831 "dma_device_type": 1 00:12:40.831 }, 00:12:40.831 { 00:12:40.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.831 "dma_device_type": 2 00:12:40.831 } 00:12:40.831 ], 00:12:40.831 "driver_specific": { 00:12:40.831 "raid": { 00:12:40.831 "uuid": "c93edac5-7b92-4a10-93bd-7ab6046eb8d1", 00:12:40.831 "strip_size_kb": 0, 00:12:40.831 "state": "online", 00:12:40.831 "raid_level": "raid1", 00:12:40.831 "superblock": false, 00:12:40.831 "num_base_bdevs": 3, 00:12:40.831 "num_base_bdevs_discovered": 3, 00:12:40.831 "num_base_bdevs_operational": 3, 00:12:40.831 "base_bdevs_list": [ 00:12:40.831 { 00:12:40.831 "name": "NewBaseBdev", 00:12:40.831 "uuid": "59e2d7a9-09e4-4099-8a45-c5e05cd1e8f1", 00:12:40.831 "is_configured": true, 00:12:40.831 "data_offset": 0, 00:12:40.831 "data_size": 65536 00:12:40.831 }, 00:12:40.831 { 00:12:40.831 "name": "BaseBdev2", 00:12:40.831 "uuid": "38cb439a-3cd9-4c3c-92c4-cc676901a5c9", 00:12:40.831 "is_configured": true, 00:12:40.831 "data_offset": 0, 00:12:40.831 "data_size": 65536 00:12:40.831 }, 00:12:40.831 { 00:12:40.831 "name": "BaseBdev3", 00:12:40.831 "uuid": "4d49843e-e722-4f75-a711-30f2a0145778", 00:12:40.831 "is_configured": true, 
00:12:40.831 "data_offset": 0, 00:12:40.831 "data_size": 65536 00:12:40.831 } 00:12:40.831 ] 00:12:40.831 } 00:12:40.831 } 00:12:40.831 }' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:40.831 BaseBdev2 00:12:40.831 BaseBdev3' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.831 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.093 [2024-11-27 08:44:37.610572] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:12:41.093 [2024-11-27 08:44:37.610639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.093 [2024-11-27 08:44:37.610777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.093 [2024-11-27 08:44:37.611205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.093 [2024-11-27 08:44:37.611226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67581 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 67581 ']' 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # kill -0 67581 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 67581 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:41.093 killing process with pid 67581 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 67581' 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # kill 67581 00:12:41.093 [2024-11-27 08:44:37.656989] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:12:41.093 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 67581 00:12:41.351 [2024-11-27 08:44:37.958401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:42.726 00:12:42.726 real 0m12.075s 00:12:42.726 user 0m19.842s 00:12:42.726 sys 0m1.747s 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.726 ************************************ 00:12:42.726 END TEST raid_state_function_test 00:12:42.726 ************************************ 00:12:42.726 08:44:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:42.726 08:44:39 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:12:42.726 08:44:39 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:42.726 08:44:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.726 ************************************ 00:12:42.726 START TEST raid_state_function_test_sb 00:12:42.726 ************************************ 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 3 true 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68219 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:42.726 Process raid pid: 68219 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68219' 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68219 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 68219 ']' 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:42.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:42.726 08:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.726 [2024-11-27 08:44:39.242731] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:12:42.726 [2024-11-27 08:44:39.242906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:42.726 [2024-11-27 08:44:39.417838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:42.984 [2024-11-27 08:44:39.568132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:43.242 [2024-11-27 08:44:39.797442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:43.242 [2024-11-27 08:44:39.797516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:43.500 [2024-11-27 08:44:40.233857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:43.500 [2024-11-27 08:44:40.233927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:43.500 [2024-11-27 08:44:40.233946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:43.500 [2024-11-27 08:44:40.233964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:43.500 [2024-11-27 08:44:40.233974] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:43.500 [2024-11-27 08:44:40.233990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.500 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:43.758 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.758 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:43.758 "name": "Existed_Raid",
00:12:43.758 "uuid": "b8aa07e8-dad1-4759-ae04-a8b89bf93172",
00:12:43.758 "strip_size_kb": 0,
00:12:43.758 "state": "configuring",
00:12:43.758 "raid_level": "raid1",
00:12:43.758 "superblock": true,
00:12:43.758 "num_base_bdevs": 3,
00:12:43.758 "num_base_bdevs_discovered": 0,
00:12:43.758 "num_base_bdevs_operational": 3,
00:12:43.758 "base_bdevs_list": [
00:12:43.758 {
00:12:43.758 "name": "BaseBdev1",
00:12:43.758 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:43.758 "is_configured": false,
00:12:43.758 "data_offset": 0,
00:12:43.758 "data_size": 0
00:12:43.758 },
00:12:43.758 {
00:12:43.758 "name": "BaseBdev2",
00:12:43.758 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:43.758 "is_configured": false,
00:12:43.758 "data_offset": 0,
00:12:43.758 "data_size": 0
00:12:43.758 },
00:12:43.758 {
00:12:43.758 "name": "BaseBdev3",
00:12:43.758 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:43.758 "is_configured": false,
00:12:43.758 "data_offset": 0,
00:12:43.758 "data_size": 0
00:12:43.758 }
00:12:43.758 ]
00:12:43.758 }'
00:12:43.758 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:43.758 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.016 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:44.016 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.016 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.016 [2024-11-27 08:44:40.765942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:44.016 [2024-11-27 08:44:40.766000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:44.016 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.016 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:44.016 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.016 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.016 [2024-11-27 08:44:40.773905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:44.016 [2024-11-27 08:44:40.773961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:44.016 [2024-11-27 08:44:40.773977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:44.016 [2024-11-27 08:44:40.773994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:44.016 [2024-11-27 08:44:40.774004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:44.016 [2024-11-27 08:44:40.774020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.275 [2024-11-27 08:44:40.822869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:44.275 BaseBdev1
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout=
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]]
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.275 [
00:12:44.275 {
00:12:44.275 "name": "BaseBdev1",
00:12:44.275 "aliases": [
00:12:44.275 "936850e6-7462-48f3-97a1-57211cba86d8"
00:12:44.275 ],
00:12:44.275 "product_name": "Malloc disk",
00:12:44.275 "block_size": 512,
00:12:44.275 "num_blocks": 65536,
00:12:44.275 "uuid": "936850e6-7462-48f3-97a1-57211cba86d8",
00:12:44.275 "assigned_rate_limits": {
00:12:44.275 "rw_ios_per_sec": 0,
00:12:44.275 "rw_mbytes_per_sec": 0,
00:12:44.275 "r_mbytes_per_sec": 0,
00:12:44.275 "w_mbytes_per_sec": 0
00:12:44.275 },
00:12:44.275 "claimed": true,
00:12:44.275 "claim_type": "exclusive_write",
00:12:44.275 "zoned": false,
00:12:44.275 "supported_io_types": {
00:12:44.275 "read": true,
00:12:44.275 "write": true,
00:12:44.275 "unmap": true,
00:12:44.275 "flush": true,
00:12:44.275 "reset": true,
00:12:44.275 "nvme_admin": false,
00:12:44.275 "nvme_io": false,
00:12:44.275 "nvme_io_md": false,
00:12:44.275 "write_zeroes": true,
00:12:44.275 "zcopy": true,
00:12:44.275 "get_zone_info": false,
00:12:44.275 "zone_management": false,
00:12:44.275 "zone_append": false,
00:12:44.275 "compare": false,
00:12:44.275 "compare_and_write": false,
00:12:44.275 "abort": true,
00:12:44.275 "seek_hole": false,
00:12:44.275 "seek_data": false,
00:12:44.275 "copy": true,
00:12:44.275 "nvme_iov_md": false
00:12:44.275 },
00:12:44.275 "memory_domains": [
00:12:44.275 {
00:12:44.275 "dma_device_id": "system",
00:12:44.275 "dma_device_type": 1
00:12:44.275 },
00:12:44.275 {
00:12:44.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:44.275 "dma_device_type": 2
00:12:44.275 }
00:12:44.275 ],
00:12:44.275 "driver_specific": {}
00:12:44.275 }
00:12:44.275 ]
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:44.275 "name": "Existed_Raid",
00:12:44.275 "uuid": "2d2e3a76-83b5-4290-aac6-d1f984a7f32b",
00:12:44.275 "strip_size_kb": 0,
00:12:44.275 "state": "configuring",
00:12:44.275 "raid_level": "raid1",
00:12:44.275 "superblock": true,
00:12:44.275 "num_base_bdevs": 3,
00:12:44.275 "num_base_bdevs_discovered": 1,
00:12:44.275 "num_base_bdevs_operational": 3,
00:12:44.275 "base_bdevs_list": [
00:12:44.275 {
00:12:44.275 "name": "BaseBdev1",
00:12:44.275 "uuid": "936850e6-7462-48f3-97a1-57211cba86d8",
00:12:44.275 "is_configured": true,
00:12:44.275 "data_offset": 2048,
00:12:44.275 "data_size": 63488
00:12:44.275 },
00:12:44.275 {
00:12:44.275 "name": "BaseBdev2",
00:12:44.275 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:44.275 "is_configured": false,
00:12:44.275 "data_offset": 0,
00:12:44.275 "data_size": 0
00:12:44.275 },
00:12:44.275 {
00:12:44.275 "name": "BaseBdev3",
00:12:44.275 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:44.275 "is_configured": false,
00:12:44.275 "data_offset": 0,
00:12:44.275 "data_size": 0
00:12:44.275 }
00:12:44.275 ]
00:12:44.275 }'
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:44.275 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.840 [2024-11-27 08:44:41.351069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:44.840 [2024-11-27 08:44:41.351154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.840 [2024-11-27 08:44:41.359102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:44.840 [2024-11-27 08:44:41.361725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:44.840 [2024-11-27 08:44:41.361782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:44.840 [2024-11-27 08:44:41.361801] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:44.840 [2024-11-27 08:44:41.361818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:44.840 "name": "Existed_Raid",
00:12:44.840 "uuid": "4f842050-79de-437d-9813-fc1fcd25878d",
00:12:44.840 "strip_size_kb": 0,
00:12:44.840 "state": "configuring",
00:12:44.840 "raid_level": "raid1",
00:12:44.840 "superblock": true,
00:12:44.840 "num_base_bdevs": 3,
00:12:44.840 "num_base_bdevs_discovered": 1,
00:12:44.840 "num_base_bdevs_operational": 3,
00:12:44.840 "base_bdevs_list": [
00:12:44.840 {
00:12:44.840 "name": "BaseBdev1",
00:12:44.840 "uuid": "936850e6-7462-48f3-97a1-57211cba86d8",
00:12:44.840 "is_configured": true,
00:12:44.840 "data_offset": 2048,
00:12:44.840 "data_size": 63488
00:12:44.840 },
00:12:44.840 {
00:12:44.840 "name": "BaseBdev2",
00:12:44.840 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:44.840 "is_configured": false,
00:12:44.840 "data_offset": 0,
00:12:44.840 "data_size": 0
00:12:44.840 },
00:12:44.840 {
00:12:44.840 "name": "BaseBdev3",
00:12:44.840 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:44.840 "is_configured": false,
00:12:44.840 "data_offset": 0,
00:12:44.840 "data_size": 0
00:12:44.840 }
00:12:44.840 ]
00:12:44.840 }'
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:44.840 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:45.462 [2024-11-27 08:44:41.931143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:45.462 BaseBdev2
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout=
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]]
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:45.462 [
00:12:45.462 {
00:12:45.462 "name": "BaseBdev2",
00:12:45.462 "aliases": [
00:12:45.462 "3af64897-eeac-4700-8c18-83e3abcc4ddc"
00:12:45.462 ],
00:12:45.462 "product_name": "Malloc disk",
00:12:45.462 "block_size": 512,
00:12:45.462 "num_blocks": 65536,
00:12:45.462 "uuid": "3af64897-eeac-4700-8c18-83e3abcc4ddc",
00:12:45.462 "assigned_rate_limits": {
00:12:45.462 "rw_ios_per_sec": 0,
00:12:45.462 "rw_mbytes_per_sec": 0,
00:12:45.462 "r_mbytes_per_sec": 0,
00:12:45.462 "w_mbytes_per_sec": 0
00:12:45.462 },
00:12:45.462 "claimed": true,
00:12:45.462 "claim_type": "exclusive_write",
00:12:45.462 "zoned": false,
00:12:45.462 "supported_io_types": {
00:12:45.462 "read": true,
00:12:45.462 "write": true,
00:12:45.462 "unmap": true,
00:12:45.462 "flush": true,
00:12:45.462 "reset": true,
00:12:45.462 "nvme_admin": false,
00:12:45.462 "nvme_io": false,
00:12:45.462 "nvme_io_md": false,
00:12:45.462 "write_zeroes": true,
00:12:45.462 "zcopy": true,
00:12:45.462 "get_zone_info": false,
00:12:45.462 "zone_management": false,
00:12:45.462 "zone_append": false,
00:12:45.462 "compare": false,
00:12:45.462 "compare_and_write": false,
00:12:45.462 "abort": true,
00:12:45.462 "seek_hole": false,
00:12:45.462 "seek_data": false,
00:12:45.462 "copy": true,
00:12:45.462 "nvme_iov_md": false
00:12:45.462 },
00:12:45.462 "memory_domains": [
00:12:45.462 {
00:12:45.462 "dma_device_id": "system",
00:12:45.462 "dma_device_type": 1
00:12:45.462 },
00:12:45.462 {
00:12:45.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:45.462 "dma_device_type": 2
00:12:45.462 }
00:12:45.462 ],
00:12:45.462 "driver_specific": {}
00:12:45.462 }
00:12:45.462 ]
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:45.462 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:45.463 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:45.463 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:45.463 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.463 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:45.463 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.463 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:45.463 "name": "Existed_Raid",
00:12:45.463 "uuid": "4f842050-79de-437d-9813-fc1fcd25878d",
00:12:45.463 "strip_size_kb": 0,
00:12:45.463 "state": "configuring",
00:12:45.463 "raid_level": "raid1",
00:12:45.463 "superblock": true,
00:12:45.463 "num_base_bdevs": 3,
00:12:45.463 "num_base_bdevs_discovered": 2,
00:12:45.463 "num_base_bdevs_operational": 3,
00:12:45.463 "base_bdevs_list": [
00:12:45.463 {
00:12:45.463 "name": "BaseBdev1",
00:12:45.463 "uuid": "936850e6-7462-48f3-97a1-57211cba86d8",
00:12:45.463 "is_configured": true,
00:12:45.463 "data_offset": 2048,
00:12:45.463 "data_size": 63488
00:12:45.463 },
00:12:45.463 {
00:12:45.463 "name": "BaseBdev2",
00:12:45.463 "uuid": "3af64897-eeac-4700-8c18-83e3abcc4ddc",
00:12:45.463 "is_configured": true,
00:12:45.463 "data_offset": 2048,
00:12:45.463 "data_size": 63488
00:12:45.463 },
00:12:45.463 {
00:12:45.463 "name": "BaseBdev3",
00:12:45.463 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:45.463 "is_configured": false,
00:12:45.463 "data_offset": 0,
00:12:45.463 "data_size": 0
00:12:45.463 }
00:12:45.463 ]
00:12:45.463 }'
00:12:45.463 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:45.463 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:46.051 [2024-11-27 08:44:42.548640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:46.051 [2024-11-27 08:44:42.549050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:46.051 [2024-11-27 08:44:42.549086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:46.051 [2024-11-27 08:44:42.549502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:46.051 BaseBdev3
00:12:46.051 [2024-11-27 08:44:42.549739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:46.051 [2024-11-27 08:44:42.549757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:12:46.051 [2024-11-27 08:44:42.549955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout=
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]]
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:46.051 [
00:12:46.051 {
00:12:46.051 "name": "BaseBdev3",
00:12:46.051 "aliases": [
00:12:46.051 "c5e41c89-146e-4bdb-b248-ce07cd7bdce5"
00:12:46.051 ],
00:12:46.051 "product_name": "Malloc disk",
00:12:46.051 "block_size": 512,
00:12:46.051 "num_blocks": 65536,
00:12:46.051 "uuid": "c5e41c89-146e-4bdb-b248-ce07cd7bdce5",
00:12:46.051 "assigned_rate_limits": {
00:12:46.051 "rw_ios_per_sec": 0,
00:12:46.051 "rw_mbytes_per_sec": 0,
00:12:46.051 "r_mbytes_per_sec": 0,
00:12:46.051 "w_mbytes_per_sec": 0
00:12:46.051 },
00:12:46.051 "claimed": true,
00:12:46.051 "claim_type": "exclusive_write",
00:12:46.051 "zoned": false,
00:12:46.051 "supported_io_types": {
00:12:46.051 "read": true,
00:12:46.051 "write": true,
00:12:46.051 "unmap": true,
00:12:46.051 "flush": true,
00:12:46.051 "reset": true,
00:12:46.051 "nvme_admin": false,
00:12:46.051 "nvme_io": false,
00:12:46.051 "nvme_io_md": false,
00:12:46.051 "write_zeroes": true,
00:12:46.051 "zcopy": true,
00:12:46.051 "get_zone_info": false,
00:12:46.051 "zone_management": false,
00:12:46.051 "zone_append": false,
00:12:46.051 "compare": false,
00:12:46.051 "compare_and_write": false,
00:12:46.051 "abort": true,
00:12:46.051 "seek_hole": false,
00:12:46.051 "seek_data": false,
00:12:46.051 "copy": true,
00:12:46.051 "nvme_iov_md": false
00:12:46.051 },
00:12:46.051 "memory_domains": [
00:12:46.051 {
00:12:46.051 "dma_device_id": "system",
00:12:46.051 "dma_device_type": 1
00:12:46.051 },
00:12:46.051 {
00:12:46.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:46.051 "dma_device_type": 2
00:12:46.051 }
00:12:46.051 ],
00:12:46.051 "driver_specific": {}
00:12:46.051 }
00:12:46.051 ]
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:46.051 "name": "Existed_Raid",
00:12:46.051 "uuid": "4f842050-79de-437d-9813-fc1fcd25878d",
00:12:46.051 "strip_size_kb": 0,
00:12:46.051 "state": "online",
00:12:46.051 "raid_level": "raid1",
00:12:46.051 "superblock": true,
00:12:46.051 "num_base_bdevs": 3,
00:12:46.051 "num_base_bdevs_discovered": 3,
00:12:46.051 "num_base_bdevs_operational": 3,
00:12:46.051 "base_bdevs_list": [
00:12:46.051 {
00:12:46.051 "name": "BaseBdev1",
00:12:46.051 "uuid": "936850e6-7462-48f3-97a1-57211cba86d8",
00:12:46.051 "is_configured": true,
00:12:46.051 "data_offset": 2048,
00:12:46.051 "data_size": 63488
00:12:46.051 },
00:12:46.051 {
00:12:46.051 "name": "BaseBdev2",
00:12:46.051 "uuid": "3af64897-eeac-4700-8c18-83e3abcc4ddc",
00:12:46.051 "is_configured": true,
00:12:46.051 "data_offset": 2048,
00:12:46.051 "data_size": 63488
00:12:46.051 },
00:12:46.051 {
00:12:46.051 "name": "BaseBdev3",
00:12:46.051 "uuid": "c5e41c89-146e-4bdb-b248-ce07cd7bdce5",
00:12:46.051 "is_configured": true,
00:12:46.051 "data_offset": 2048,
00:12:46.051 "data_size": 63488
00:12:46.051 }
00:12:46.051 ]
00:12:46.051 }'
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:46.051 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.615 [2024-11-27 08:44:43.085267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.615 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:46.615 "name": "Existed_Raid", 00:12:46.615 "aliases": [ 00:12:46.615 "4f842050-79de-437d-9813-fc1fcd25878d" 00:12:46.615 ], 00:12:46.616 "product_name": "Raid Volume", 00:12:46.616 "block_size": 512, 00:12:46.616 "num_blocks": 63488, 00:12:46.616 "uuid": "4f842050-79de-437d-9813-fc1fcd25878d", 00:12:46.616 "assigned_rate_limits": { 00:12:46.616 "rw_ios_per_sec": 0, 00:12:46.616 "rw_mbytes_per_sec": 0, 00:12:46.616 "r_mbytes_per_sec": 0, 00:12:46.616 "w_mbytes_per_sec": 0 00:12:46.616 }, 00:12:46.616 "claimed": false, 00:12:46.616 "zoned": false, 00:12:46.616 "supported_io_types": { 00:12:46.616 "read": true, 00:12:46.616 "write": true, 00:12:46.616 "unmap": false, 00:12:46.616 "flush": false, 00:12:46.616 "reset": true, 00:12:46.616 "nvme_admin": false, 00:12:46.616 "nvme_io": false, 00:12:46.616 "nvme_io_md": false, 00:12:46.616 "write_zeroes": true, 
00:12:46.616 "zcopy": false, 00:12:46.616 "get_zone_info": false, 00:12:46.616 "zone_management": false, 00:12:46.616 "zone_append": false, 00:12:46.616 "compare": false, 00:12:46.616 "compare_and_write": false, 00:12:46.616 "abort": false, 00:12:46.616 "seek_hole": false, 00:12:46.616 "seek_data": false, 00:12:46.616 "copy": false, 00:12:46.616 "nvme_iov_md": false 00:12:46.616 }, 00:12:46.616 "memory_domains": [ 00:12:46.616 { 00:12:46.616 "dma_device_id": "system", 00:12:46.616 "dma_device_type": 1 00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.616 "dma_device_type": 2 00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "dma_device_id": "system", 00:12:46.616 "dma_device_type": 1 00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.616 "dma_device_type": 2 00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "dma_device_id": "system", 00:12:46.616 "dma_device_type": 1 00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.616 "dma_device_type": 2 00:12:46.616 } 00:12:46.616 ], 00:12:46.616 "driver_specific": { 00:12:46.616 "raid": { 00:12:46.616 "uuid": "4f842050-79de-437d-9813-fc1fcd25878d", 00:12:46.616 "strip_size_kb": 0, 00:12:46.616 "state": "online", 00:12:46.616 "raid_level": "raid1", 00:12:46.616 "superblock": true, 00:12:46.616 "num_base_bdevs": 3, 00:12:46.616 "num_base_bdevs_discovered": 3, 00:12:46.616 "num_base_bdevs_operational": 3, 00:12:46.616 "base_bdevs_list": [ 00:12:46.616 { 00:12:46.616 "name": "BaseBdev1", 00:12:46.616 "uuid": "936850e6-7462-48f3-97a1-57211cba86d8", 00:12:46.616 "is_configured": true, 00:12:46.616 "data_offset": 2048, 00:12:46.616 "data_size": 63488 00:12:46.616 }, 00:12:46.616 { 00:12:46.616 "name": "BaseBdev2", 00:12:46.616 "uuid": "3af64897-eeac-4700-8c18-83e3abcc4ddc", 00:12:46.616 "is_configured": true, 00:12:46.616 "data_offset": 2048, 00:12:46.616 "data_size": 63488 00:12:46.616 }, 00:12:46.616 { 
00:12:46.616 "name": "BaseBdev3", 00:12:46.616 "uuid": "c5e41c89-146e-4bdb-b248-ce07cd7bdce5", 00:12:46.616 "is_configured": true, 00:12:46.616 "data_offset": 2048, 00:12:46.616 "data_size": 63488 00:12:46.616 } 00:12:46.616 ] 00:12:46.616 } 00:12:46.616 } 00:12:46.616 }' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:46.616 BaseBdev2 00:12:46.616 BaseBdev3' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.616 08:44:43 
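The trace above pipes the `bdev_get_bdevs` dump through `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` to collect the configured base bdev names. As an illustrative aside (not part of the test suite itself), the same selection can be sketched in Python against a trimmed sample of the JSON shown in this dump:

```python
import json

# Trimmed sample of the Raid Volume JSON dumped by bdev_get_bdevs above.
raid_bdev_json = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "state": "online",
      "raid_level": "raid1",
      "num_base_bdevs": 3,
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
configured = [
    b["name"]
    for b in raid_bdev_json["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(configured)  # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
```

This mirrors how the test builds `base_bdev_names='BaseBdev1 BaseBdev2 BaseBdev3'` before comparing per-bdev block sizes.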
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.616 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.874 [2024-11-27 08:44:43.392992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.874 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.875 
08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.875 "name": "Existed_Raid", 00:12:46.875 "uuid": "4f842050-79de-437d-9813-fc1fcd25878d", 00:12:46.875 "strip_size_kb": 0, 00:12:46.875 "state": "online", 00:12:46.875 "raid_level": "raid1", 00:12:46.875 "superblock": true, 00:12:46.875 "num_base_bdevs": 3, 00:12:46.875 "num_base_bdevs_discovered": 2, 00:12:46.875 "num_base_bdevs_operational": 2, 00:12:46.875 "base_bdevs_list": [ 00:12:46.875 { 00:12:46.875 "name": null, 00:12:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.875 "is_configured": false, 00:12:46.875 "data_offset": 0, 00:12:46.875 "data_size": 63488 00:12:46.875 }, 00:12:46.875 { 00:12:46.875 "name": "BaseBdev2", 00:12:46.875 "uuid": "3af64897-eeac-4700-8c18-83e3abcc4ddc", 00:12:46.875 "is_configured": true, 00:12:46.875 "data_offset": 2048, 00:12:46.875 "data_size": 63488 00:12:46.875 }, 00:12:46.875 { 00:12:46.875 "name": "BaseBdev3", 00:12:46.875 "uuid": "c5e41c89-146e-4bdb-b248-ce07cd7bdce5", 00:12:46.875 "is_configured": true, 00:12:46.875 "data_offset": 2048, 00:12:46.875 "data_size": 63488 00:12:46.875 } 00:12:46.875 ] 00:12:46.875 }' 00:12:46.875 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.875 
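The `verify_raid_bdev_state` helper invoked above compares the `bdev_raid_get_bdevs` JSON against an expected state, RAID level, strip size, and operational base-bdev count; after `bdev_malloc_delete BaseBdev1`, the dump shows a `null` slot with `num_base_bdevs_discovered: 2`. The helper itself is shell; purely as an illustrative sketch, its core assertions could be written in Python against a sample modeled on the dump above (field values taken from this trace):

```python
def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Sketch of the shell helper's checks on bdev_raid_get_bdevs output."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered bdevs are the configured entries in base_bdevs_list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered
    return True

# Sample mirroring the dump after BaseBdev1 was removed (null slot remains).
sample = {
    "state": "online", "raid_level": "raid1", "strip_size_kb": 0,
    "num_base_bdevs": 3, "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
        {"name": None, "is_configured": False},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
    ],
}
print(verify_raid_bdev_state(sample, "online", "raid1", 0, 2))  # True
```

Because raid1 has redundancy (`has_redundancy raid1` returns 0 above), the expected state after losing one base bdev stays `online` rather than dropping to `offline`.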
08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.440 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:47.440 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.440 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.440 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.440 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.440 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.440 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.440 [2024-11-27 08:44:44.012409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.440 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.440 [2024-11-27 08:44:44.164025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.440 [2024-11-27 08:44:44.164200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.698 [2024-11-27 08:44:44.256454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.698 [2024-11-27 08:44:44.256546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.698 [2024-11-27 08:44:44.256570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.698 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.698 BaseBdev2 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 
00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.699 [ 00:12:47.699 { 00:12:47.699 "name": "BaseBdev2", 00:12:47.699 "aliases": [ 00:12:47.699 "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7" 00:12:47.699 ], 00:12:47.699 "product_name": "Malloc disk", 00:12:47.699 "block_size": 512, 00:12:47.699 "num_blocks": 65536, 00:12:47.699 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:47.699 "assigned_rate_limits": { 00:12:47.699 "rw_ios_per_sec": 0, 00:12:47.699 "rw_mbytes_per_sec": 0, 00:12:47.699 "r_mbytes_per_sec": 0, 00:12:47.699 "w_mbytes_per_sec": 0 00:12:47.699 }, 00:12:47.699 "claimed": false, 00:12:47.699 "zoned": false, 00:12:47.699 "supported_io_types": { 00:12:47.699 "read": true, 00:12:47.699 "write": true, 00:12:47.699 "unmap": true, 00:12:47.699 "flush": true, 00:12:47.699 "reset": true, 00:12:47.699 "nvme_admin": false, 00:12:47.699 "nvme_io": false, 00:12:47.699 
"nvme_io_md": false, 00:12:47.699 "write_zeroes": true, 00:12:47.699 "zcopy": true, 00:12:47.699 "get_zone_info": false, 00:12:47.699 "zone_management": false, 00:12:47.699 "zone_append": false, 00:12:47.699 "compare": false, 00:12:47.699 "compare_and_write": false, 00:12:47.699 "abort": true, 00:12:47.699 "seek_hole": false, 00:12:47.699 "seek_data": false, 00:12:47.699 "copy": true, 00:12:47.699 "nvme_iov_md": false 00:12:47.699 }, 00:12:47.699 "memory_domains": [ 00:12:47.699 { 00:12:47.699 "dma_device_id": "system", 00:12:47.699 "dma_device_type": 1 00:12:47.699 }, 00:12:47.699 { 00:12:47.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.699 "dma_device_type": 2 00:12:47.699 } 00:12:47.699 ], 00:12:47.699 "driver_specific": {} 00:12:47.699 } 00:12:47.699 ] 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.699 BaseBdev3 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.699 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 [ 00:12:47.956 { 00:12:47.956 "name": "BaseBdev3", 00:12:47.956 "aliases": [ 00:12:47.956 "2bac11b4-c5aa-44f3-bd63-a4c96b625679" 00:12:47.956 ], 00:12:47.956 "product_name": "Malloc disk", 00:12:47.956 "block_size": 512, 00:12:47.956 "num_blocks": 65536, 00:12:47.956 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:47.956 "assigned_rate_limits": { 00:12:47.956 "rw_ios_per_sec": 0, 00:12:47.956 "rw_mbytes_per_sec": 0, 00:12:47.956 "r_mbytes_per_sec": 0, 00:12:47.956 "w_mbytes_per_sec": 0 00:12:47.956 }, 00:12:47.956 "claimed": false, 00:12:47.956 "zoned": false, 00:12:47.956 "supported_io_types": { 00:12:47.956 "read": true, 00:12:47.956 "write": true, 00:12:47.956 "unmap": true, 00:12:47.956 "flush": true, 00:12:47.956 "reset": true, 00:12:47.956 "nvme_admin": false, 
00:12:47.956 "nvme_io": false, 00:12:47.956 "nvme_io_md": false, 00:12:47.956 "write_zeroes": true, 00:12:47.956 "zcopy": true, 00:12:47.956 "get_zone_info": false, 00:12:47.956 "zone_management": false, 00:12:47.956 "zone_append": false, 00:12:47.956 "compare": false, 00:12:47.956 "compare_and_write": false, 00:12:47.956 "abort": true, 00:12:47.956 "seek_hole": false, 00:12:47.956 "seek_data": false, 00:12:47.956 "copy": true, 00:12:47.956 "nvme_iov_md": false 00:12:47.956 }, 00:12:47.956 "memory_domains": [ 00:12:47.956 { 00:12:47.956 "dma_device_id": "system", 00:12:47.956 "dma_device_type": 1 00:12:47.956 }, 00:12:47.956 { 00:12:47.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.956 "dma_device_type": 2 00:12:47.956 } 00:12:47.956 ], 00:12:47.956 "driver_specific": {} 00:12:47.956 } 00:12:47.956 ] 00:12:47.956 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.956 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:47.956 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.956 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.956 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.956 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.956 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.956 [2024-11-27 08:44:44.469192] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.956 [2024-11-27 08:44:44.469274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.956 [2024-11-27 08:44:44.469306] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.957 [2024-11-27 08:44:44.471987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.957 
08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.957 "name": "Existed_Raid", 00:12:47.957 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:47.957 "strip_size_kb": 0, 00:12:47.957 "state": "configuring", 00:12:47.957 "raid_level": "raid1", 00:12:47.957 "superblock": true, 00:12:47.957 "num_base_bdevs": 3, 00:12:47.957 "num_base_bdevs_discovered": 2, 00:12:47.957 "num_base_bdevs_operational": 3, 00:12:47.957 "base_bdevs_list": [ 00:12:47.957 { 00:12:47.957 "name": "BaseBdev1", 00:12:47.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.957 "is_configured": false, 00:12:47.957 "data_offset": 0, 00:12:47.957 "data_size": 0 00:12:47.957 }, 00:12:47.957 { 00:12:47.957 "name": "BaseBdev2", 00:12:47.957 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:47.957 "is_configured": true, 00:12:47.957 "data_offset": 2048, 00:12:47.957 "data_size": 63488 00:12:47.957 }, 00:12:47.957 { 00:12:47.957 "name": "BaseBdev3", 00:12:47.957 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:47.957 "is_configured": true, 00:12:47.957 "data_offset": 2048, 00:12:47.957 "data_size": 63488 00:12:47.957 } 00:12:47.957 ] 00:12:47.957 }' 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.957 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.522 [2024-11-27 08:44:44.989399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:48.522 08:44:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.522 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.523 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.523 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.523 "name": 
"Existed_Raid", 00:12:48.523 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:48.523 "strip_size_kb": 0, 00:12:48.523 "state": "configuring", 00:12:48.523 "raid_level": "raid1", 00:12:48.523 "superblock": true, 00:12:48.523 "num_base_bdevs": 3, 00:12:48.523 "num_base_bdevs_discovered": 1, 00:12:48.523 "num_base_bdevs_operational": 3, 00:12:48.523 "base_bdevs_list": [ 00:12:48.523 { 00:12:48.523 "name": "BaseBdev1", 00:12:48.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.523 "is_configured": false, 00:12:48.523 "data_offset": 0, 00:12:48.523 "data_size": 0 00:12:48.523 }, 00:12:48.523 { 00:12:48.523 "name": null, 00:12:48.523 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:48.523 "is_configured": false, 00:12:48.523 "data_offset": 0, 00:12:48.523 "data_size": 63488 00:12:48.523 }, 00:12:48.523 { 00:12:48.523 "name": "BaseBdev3", 00:12:48.523 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:48.523 "is_configured": true, 00:12:48.523 "data_offset": 2048, 00:12:48.523 "data_size": 63488 00:12:48.523 } 00:12:48.523 ] 00:12:48.523 }' 00:12:48.523 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.523 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.780 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.780 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.780 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.780 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.780 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:49.038 
08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.038 [2024-11-27 08:44:45.619713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.038 BaseBdev1 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.038 [ 00:12:49.038 { 00:12:49.038 "name": "BaseBdev1", 00:12:49.038 "aliases": [ 00:12:49.038 "49997c91-29ab-411d-ae8d-bdcc4217eaa5" 00:12:49.038 ], 00:12:49.038 "product_name": "Malloc disk", 00:12:49.038 "block_size": 512, 00:12:49.038 "num_blocks": 65536, 00:12:49.038 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:49.038 "assigned_rate_limits": { 00:12:49.038 "rw_ios_per_sec": 0, 00:12:49.038 "rw_mbytes_per_sec": 0, 00:12:49.038 "r_mbytes_per_sec": 0, 00:12:49.038 "w_mbytes_per_sec": 0 00:12:49.038 }, 00:12:49.038 "claimed": true, 00:12:49.038 "claim_type": "exclusive_write", 00:12:49.038 "zoned": false, 00:12:49.038 "supported_io_types": { 00:12:49.038 "read": true, 00:12:49.038 "write": true, 00:12:49.038 "unmap": true, 00:12:49.038 "flush": true, 00:12:49.038 "reset": true, 00:12:49.038 "nvme_admin": false, 00:12:49.038 "nvme_io": false, 00:12:49.038 "nvme_io_md": false, 00:12:49.038 "write_zeroes": true, 00:12:49.038 "zcopy": true, 00:12:49.038 "get_zone_info": false, 00:12:49.038 "zone_management": false, 00:12:49.038 "zone_append": false, 00:12:49.038 "compare": false, 00:12:49.038 "compare_and_write": false, 00:12:49.038 "abort": true, 00:12:49.038 "seek_hole": false, 00:12:49.038 "seek_data": false, 00:12:49.038 "copy": true, 00:12:49.038 "nvme_iov_md": false 00:12:49.038 }, 00:12:49.038 "memory_domains": [ 00:12:49.038 { 00:12:49.038 "dma_device_id": "system", 00:12:49.038 "dma_device_type": 1 00:12:49.038 }, 00:12:49.038 { 00:12:49.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.038 "dma_device_type": 2 00:12:49.038 } 00:12:49.038 ], 00:12:49.038 "driver_specific": {} 00:12:49.038 } 00:12:49.038 ] 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:49.038 
08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.038 "name": "Existed_Raid", 00:12:49.038 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:49.038 "strip_size_kb": 0, 
00:12:49.038 "state": "configuring", 00:12:49.038 "raid_level": "raid1", 00:12:49.038 "superblock": true, 00:12:49.038 "num_base_bdevs": 3, 00:12:49.038 "num_base_bdevs_discovered": 2, 00:12:49.038 "num_base_bdevs_operational": 3, 00:12:49.038 "base_bdevs_list": [ 00:12:49.038 { 00:12:49.038 "name": "BaseBdev1", 00:12:49.038 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:49.038 "is_configured": true, 00:12:49.038 "data_offset": 2048, 00:12:49.038 "data_size": 63488 00:12:49.038 }, 00:12:49.038 { 00:12:49.038 "name": null, 00:12:49.038 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:49.038 "is_configured": false, 00:12:49.038 "data_offset": 0, 00:12:49.038 "data_size": 63488 00:12:49.038 }, 00:12:49.038 { 00:12:49.038 "name": "BaseBdev3", 00:12:49.038 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:49.038 "is_configured": true, 00:12:49.038 "data_offset": 2048, 00:12:49.038 "data_size": 63488 00:12:49.038 } 00:12:49.038 ] 00:12:49.038 }' 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.038 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.606 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.606 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:49.606 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.607 [2024-11-27 08:44:46.267952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.607 "name": "Existed_Raid", 00:12:49.607 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:49.607 "strip_size_kb": 0, 00:12:49.607 "state": "configuring", 00:12:49.607 "raid_level": "raid1", 00:12:49.607 "superblock": true, 00:12:49.607 "num_base_bdevs": 3, 00:12:49.607 "num_base_bdevs_discovered": 1, 00:12:49.607 "num_base_bdevs_operational": 3, 00:12:49.607 "base_bdevs_list": [ 00:12:49.607 { 00:12:49.607 "name": "BaseBdev1", 00:12:49.607 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:49.607 "is_configured": true, 00:12:49.607 "data_offset": 2048, 00:12:49.607 "data_size": 63488 00:12:49.607 }, 00:12:49.607 { 00:12:49.607 "name": null, 00:12:49.607 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:49.607 "is_configured": false, 00:12:49.607 "data_offset": 0, 00:12:49.607 "data_size": 63488 00:12:49.607 }, 00:12:49.607 { 00:12:49.607 "name": null, 00:12:49.607 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:49.607 "is_configured": false, 00:12:49.607 "data_offset": 0, 00:12:49.607 "data_size": 63488 00:12:49.607 } 00:12:49.607 ] 00:12:49.607 }' 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.607 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.173 [2024-11-27 08:44:46.844167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.173 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.173 "name": "Existed_Raid", 00:12:50.173 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:50.173 "strip_size_kb": 0, 00:12:50.173 "state": "configuring", 00:12:50.173 "raid_level": "raid1", 00:12:50.173 "superblock": true, 00:12:50.173 "num_base_bdevs": 3, 00:12:50.173 "num_base_bdevs_discovered": 2, 00:12:50.173 "num_base_bdevs_operational": 3, 00:12:50.173 "base_bdevs_list": [ 00:12:50.173 { 00:12:50.173 "name": "BaseBdev1", 00:12:50.173 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:50.173 "is_configured": true, 00:12:50.173 "data_offset": 2048, 00:12:50.173 "data_size": 63488 00:12:50.173 }, 00:12:50.173 { 00:12:50.173 "name": null, 00:12:50.173 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:50.173 "is_configured": false, 00:12:50.173 "data_offset": 0, 00:12:50.173 "data_size": 63488 00:12:50.173 }, 00:12:50.173 { 00:12:50.173 "name": "BaseBdev3", 00:12:50.173 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:50.173 "is_configured": true, 00:12:50.174 "data_offset": 2048, 00:12:50.174 "data_size": 63488 00:12:50.174 } 00:12:50.174 ] 00:12:50.174 }' 00:12:50.174 08:44:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.174 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.739 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.739 [2024-11-27 08:44:47.404357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.997 "name": "Existed_Raid", 00:12:50.997 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:50.997 "strip_size_kb": 0, 00:12:50.997 "state": "configuring", 00:12:50.997 "raid_level": "raid1", 00:12:50.997 "superblock": true, 00:12:50.997 "num_base_bdevs": 3, 00:12:50.997 "num_base_bdevs_discovered": 1, 00:12:50.997 "num_base_bdevs_operational": 3, 00:12:50.997 "base_bdevs_list": [ 00:12:50.997 { 00:12:50.997 "name": null, 00:12:50.997 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:50.997 "is_configured": false, 00:12:50.997 "data_offset": 0, 00:12:50.997 "data_size": 63488 00:12:50.997 }, 00:12:50.997 { 00:12:50.997 "name": null, 00:12:50.997 "uuid": 
"f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:50.997 "is_configured": false, 00:12:50.997 "data_offset": 0, 00:12:50.997 "data_size": 63488 00:12:50.997 }, 00:12:50.997 { 00:12:50.997 "name": "BaseBdev3", 00:12:50.997 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:50.997 "is_configured": true, 00:12:50.997 "data_offset": 2048, 00:12:50.997 "data_size": 63488 00:12:50.997 } 00:12:50.997 ] 00:12:50.997 }' 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.997 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.610 [2024-11-27 08:44:48.077137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.610 "name": "Existed_Raid", 00:12:51.610 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:51.610 "strip_size_kb": 0, 00:12:51.610 "state": "configuring", 00:12:51.610 
"raid_level": "raid1", 00:12:51.610 "superblock": true, 00:12:51.610 "num_base_bdevs": 3, 00:12:51.610 "num_base_bdevs_discovered": 2, 00:12:51.610 "num_base_bdevs_operational": 3, 00:12:51.610 "base_bdevs_list": [ 00:12:51.610 { 00:12:51.610 "name": null, 00:12:51.610 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:51.610 "is_configured": false, 00:12:51.610 "data_offset": 0, 00:12:51.610 "data_size": 63488 00:12:51.610 }, 00:12:51.610 { 00:12:51.610 "name": "BaseBdev2", 00:12:51.610 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:51.610 "is_configured": true, 00:12:51.610 "data_offset": 2048, 00:12:51.610 "data_size": 63488 00:12:51.610 }, 00:12:51.610 { 00:12:51.610 "name": "BaseBdev3", 00:12:51.610 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:51.610 "is_configured": true, 00:12:51.610 "data_offset": 2048, 00:12:51.610 "data_size": 63488 00:12:51.610 } 00:12:51.610 ] 00:12:51.610 }' 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.610 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.868 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.869 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.869 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.869 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.869 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:52.127 08:44:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 49997c91-29ab-411d-ae8d-bdcc4217eaa5 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.127 [2024-11-27 08:44:48.755494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:52.127 [2024-11-27 08:44:48.755878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:52.127 [2024-11-27 08:44:48.755907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.127 [2024-11-27 08:44:48.756239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:52.127 NewBaseBdev 00:12:52.127 [2024-11-27 08:44:48.756489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:52.127 [2024-11-27 08:44:48.756514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:52.127 [2024-11-27 08:44:48.756694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:52.127 
08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:12:52.127 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.128 [ 00:12:52.128 { 00:12:52.128 "name": "NewBaseBdev", 00:12:52.128 "aliases": [ 00:12:52.128 "49997c91-29ab-411d-ae8d-bdcc4217eaa5" 00:12:52.128 ], 00:12:52.128 "product_name": "Malloc disk", 00:12:52.128 "block_size": 512, 00:12:52.128 "num_blocks": 65536, 00:12:52.128 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:52.128 "assigned_rate_limits": { 00:12:52.128 "rw_ios_per_sec": 0, 00:12:52.128 "rw_mbytes_per_sec": 0, 00:12:52.128 "r_mbytes_per_sec": 0, 00:12:52.128 "w_mbytes_per_sec": 0 00:12:52.128 }, 00:12:52.128 "claimed": true, 00:12:52.128 "claim_type": "exclusive_write", 00:12:52.128 
"zoned": false, 00:12:52.128 "supported_io_types": { 00:12:52.128 "read": true, 00:12:52.128 "write": true, 00:12:52.128 "unmap": true, 00:12:52.128 "flush": true, 00:12:52.128 "reset": true, 00:12:52.128 "nvme_admin": false, 00:12:52.128 "nvme_io": false, 00:12:52.128 "nvme_io_md": false, 00:12:52.128 "write_zeroes": true, 00:12:52.128 "zcopy": true, 00:12:52.128 "get_zone_info": false, 00:12:52.128 "zone_management": false, 00:12:52.128 "zone_append": false, 00:12:52.128 "compare": false, 00:12:52.128 "compare_and_write": false, 00:12:52.128 "abort": true, 00:12:52.128 "seek_hole": false, 00:12:52.128 "seek_data": false, 00:12:52.128 "copy": true, 00:12:52.128 "nvme_iov_md": false 00:12:52.128 }, 00:12:52.128 "memory_domains": [ 00:12:52.128 { 00:12:52.128 "dma_device_id": "system", 00:12:52.128 "dma_device_type": 1 00:12:52.128 }, 00:12:52.128 { 00:12:52.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.128 "dma_device_type": 2 00:12:52.128 } 00:12:52.128 ], 00:12:52.128 "driver_specific": {} 00:12:52.128 } 00:12:52.128 ] 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.128 "name": "Existed_Raid", 00:12:52.128 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:52.128 "strip_size_kb": 0, 00:12:52.128 "state": "online", 00:12:52.128 "raid_level": "raid1", 00:12:52.128 "superblock": true, 00:12:52.128 "num_base_bdevs": 3, 00:12:52.128 "num_base_bdevs_discovered": 3, 00:12:52.128 "num_base_bdevs_operational": 3, 00:12:52.128 "base_bdevs_list": [ 00:12:52.128 { 00:12:52.128 "name": "NewBaseBdev", 00:12:52.128 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:52.128 "is_configured": true, 00:12:52.128 "data_offset": 2048, 00:12:52.128 "data_size": 63488 00:12:52.128 }, 00:12:52.128 { 00:12:52.128 "name": "BaseBdev2", 00:12:52.128 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:52.128 "is_configured": true, 00:12:52.128 "data_offset": 2048, 00:12:52.128 "data_size": 63488 00:12:52.128 }, 00:12:52.128 
{ 00:12:52.128 "name": "BaseBdev3", 00:12:52.128 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:52.128 "is_configured": true, 00:12:52.128 "data_offset": 2048, 00:12:52.128 "data_size": 63488 00:12:52.128 } 00:12:52.128 ] 00:12:52.128 }' 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.128 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.700 [2024-11-27 08:44:49.372140] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.700 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:52.700 "name": "Existed_Raid", 00:12:52.700 
"aliases": [ 00:12:52.700 "259688e6-8fdf-4cc8-99b7-fb1e947b5783" 00:12:52.700 ], 00:12:52.700 "product_name": "Raid Volume", 00:12:52.700 "block_size": 512, 00:12:52.700 "num_blocks": 63488, 00:12:52.700 "uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:52.700 "assigned_rate_limits": { 00:12:52.700 "rw_ios_per_sec": 0, 00:12:52.700 "rw_mbytes_per_sec": 0, 00:12:52.700 "r_mbytes_per_sec": 0, 00:12:52.700 "w_mbytes_per_sec": 0 00:12:52.700 }, 00:12:52.700 "claimed": false, 00:12:52.700 "zoned": false, 00:12:52.700 "supported_io_types": { 00:12:52.700 "read": true, 00:12:52.700 "write": true, 00:12:52.700 "unmap": false, 00:12:52.700 "flush": false, 00:12:52.700 "reset": true, 00:12:52.700 "nvme_admin": false, 00:12:52.700 "nvme_io": false, 00:12:52.700 "nvme_io_md": false, 00:12:52.700 "write_zeroes": true, 00:12:52.700 "zcopy": false, 00:12:52.700 "get_zone_info": false, 00:12:52.700 "zone_management": false, 00:12:52.700 "zone_append": false, 00:12:52.700 "compare": false, 00:12:52.700 "compare_and_write": false, 00:12:52.701 "abort": false, 00:12:52.701 "seek_hole": false, 00:12:52.701 "seek_data": false, 00:12:52.701 "copy": false, 00:12:52.701 "nvme_iov_md": false 00:12:52.701 }, 00:12:52.701 "memory_domains": [ 00:12:52.701 { 00:12:52.701 "dma_device_id": "system", 00:12:52.701 "dma_device_type": 1 00:12:52.701 }, 00:12:52.701 { 00:12:52.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.701 "dma_device_type": 2 00:12:52.701 }, 00:12:52.701 { 00:12:52.701 "dma_device_id": "system", 00:12:52.701 "dma_device_type": 1 00:12:52.701 }, 00:12:52.701 { 00:12:52.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.701 "dma_device_type": 2 00:12:52.701 }, 00:12:52.701 { 00:12:52.701 "dma_device_id": "system", 00:12:52.701 "dma_device_type": 1 00:12:52.701 }, 00:12:52.701 { 00:12:52.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.701 "dma_device_type": 2 00:12:52.701 } 00:12:52.701 ], 00:12:52.701 "driver_specific": { 00:12:52.701 "raid": { 00:12:52.701 
"uuid": "259688e6-8fdf-4cc8-99b7-fb1e947b5783", 00:12:52.701 "strip_size_kb": 0, 00:12:52.701 "state": "online", 00:12:52.701 "raid_level": "raid1", 00:12:52.701 "superblock": true, 00:12:52.701 "num_base_bdevs": 3, 00:12:52.701 "num_base_bdevs_discovered": 3, 00:12:52.701 "num_base_bdevs_operational": 3, 00:12:52.701 "base_bdevs_list": [ 00:12:52.701 { 00:12:52.701 "name": "NewBaseBdev", 00:12:52.701 "uuid": "49997c91-29ab-411d-ae8d-bdcc4217eaa5", 00:12:52.701 "is_configured": true, 00:12:52.701 "data_offset": 2048, 00:12:52.701 "data_size": 63488 00:12:52.701 }, 00:12:52.701 { 00:12:52.701 "name": "BaseBdev2", 00:12:52.701 "uuid": "f7f53d96-9f8b-4e32-bc73-1eb8d7e07ca7", 00:12:52.701 "is_configured": true, 00:12:52.702 "data_offset": 2048, 00:12:52.702 "data_size": 63488 00:12:52.702 }, 00:12:52.702 { 00:12:52.702 "name": "BaseBdev3", 00:12:52.702 "uuid": "2bac11b4-c5aa-44f3-bd63-a4c96b625679", 00:12:52.702 "is_configured": true, 00:12:52.702 "data_offset": 2048, 00:12:52.702 "data_size": 63488 00:12:52.702 } 00:12:52.702 ] 00:12:52.702 } 00:12:52.702 } 00:12:52.702 }' 00:12:52.702 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:52.962 BaseBdev2 00:12:52.962 BaseBdev3' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:52.962 08:44:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.962 [2024-11-27 08:44:49.699839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.962 [2024-11-27 08:44:49.699897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.962 [2024-11-27 08:44:49.700019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.962 [2024-11-27 08:44:49.700461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.962 [2024-11-27 08:44:49.700492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68219 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # 
'[' -z 68219 ']' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 68219 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:52.962 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 68219 00:12:53.222 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:53.222 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:53.222 killing process with pid 68219 00:12:53.222 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 68219' 00:12:53.222 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 68219 00:12:53.222 [2024-11-27 08:44:49.742530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.222 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 68219 00:12:53.481 [2024-11-27 08:44:50.045843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.857 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:54.857 00:12:54.857 real 0m12.063s 00:12:54.857 user 0m19.869s 00:12:54.857 sys 0m1.711s 00:12:54.857 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:54.857 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.857 ************************************ 00:12:54.857 END TEST raid_state_function_test_sb 00:12:54.857 ************************************ 00:12:54.857 08:44:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:12:54.857 08:44:51 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:12:54.857 08:44:51 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:54.857 08:44:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.857 ************************************ 00:12:54.857 START TEST raid_superblock_test 00:12:54.857 ************************************ 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test raid1 3 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68852 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68852 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 68852 ']' 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:54.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.857 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:54.858 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.858 [2024-11-27 08:44:51.373193] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:12:54.858 [2024-11-27 08:44:51.373403] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68852 ] 00:12:54.858 [2024-11-27 08:44:51.551533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.117 [2024-11-27 08:44:51.708768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.376 [2024-11-27 08:44:51.931008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.376 [2024-11-27 08:44:51.931159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:55.635 
08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.635 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.894 malloc1 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.894 [2024-11-27 08:44:52.402727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:55.894 [2024-11-27 08:44:52.402828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.894 [2024-11-27 08:44:52.402874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:55.894 [2024-11-27 08:44:52.402892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.894 [2024-11-27 08:44:52.406015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.894 [2024-11-27 08:44:52.406064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:55.894 pt1 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.894 malloc2 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.894 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.894 [2024-11-27 08:44:52.464284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:55.894 [2024-11-27 08:44:52.464384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.895 [2024-11-27 08:44:52.464421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:55.895 [2024-11-27 08:44:52.464438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.895 [2024-11-27 08:44:52.467526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.895 [2024-11-27 08:44:52.467577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:55.895 
pt2 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.895 malloc3 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.895 [2024-11-27 08:44:52.537168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:55.895 [2024-11-27 08:44:52.537281] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.895 [2024-11-27 08:44:52.537327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:55.895 [2024-11-27 08:44:52.537362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.895 [2024-11-27 08:44:52.540836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.895 [2024-11-27 08:44:52.540888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:55.895 pt3 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.895 [2024-11-27 08:44:52.549198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:55.895 [2024-11-27 08:44:52.552146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.895 [2024-11-27 08:44:52.552280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:55.895 [2024-11-27 08:44:52.552581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:55.895 [2024-11-27 08:44:52.552622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:55.895 [2024-11-27 08:44:52.553026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:55.895 
[2024-11-27 08:44:52.553320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:55.895 [2024-11-27 08:44:52.553370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:55.895 [2024-11-27 08:44:52.553648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.895 "name": "raid_bdev1", 00:12:55.895 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:55.895 "strip_size_kb": 0, 00:12:55.895 "state": "online", 00:12:55.895 "raid_level": "raid1", 00:12:55.895 "superblock": true, 00:12:55.895 "num_base_bdevs": 3, 00:12:55.895 "num_base_bdevs_discovered": 3, 00:12:55.895 "num_base_bdevs_operational": 3, 00:12:55.895 "base_bdevs_list": [ 00:12:55.895 { 00:12:55.895 "name": "pt1", 00:12:55.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.895 "is_configured": true, 00:12:55.895 "data_offset": 2048, 00:12:55.895 "data_size": 63488 00:12:55.895 }, 00:12:55.895 { 00:12:55.895 "name": "pt2", 00:12:55.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.895 "is_configured": true, 00:12:55.895 "data_offset": 2048, 00:12:55.895 "data_size": 63488 00:12:55.895 }, 00:12:55.895 { 00:12:55.895 "name": "pt3", 00:12:55.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.895 "is_configured": true, 00:12:55.895 "data_offset": 2048, 00:12:55.895 "data_size": 63488 00:12:55.895 } 00:12:55.895 ] 00:12:55.895 }' 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.895 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:56.463 08:44:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 [2024-11-27 08:44:53.102217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.463 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:56.463 "name": "raid_bdev1", 00:12:56.463 "aliases": [ 00:12:56.463 "4e15e8f1-7c19-41cb-be43-0319573302cb" 00:12:56.463 ], 00:12:56.463 "product_name": "Raid Volume", 00:12:56.463 "block_size": 512, 00:12:56.463 "num_blocks": 63488, 00:12:56.463 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:56.463 "assigned_rate_limits": { 00:12:56.463 "rw_ios_per_sec": 0, 00:12:56.463 "rw_mbytes_per_sec": 0, 00:12:56.463 "r_mbytes_per_sec": 0, 00:12:56.463 "w_mbytes_per_sec": 0 00:12:56.463 }, 00:12:56.463 "claimed": false, 00:12:56.463 "zoned": false, 00:12:56.463 "supported_io_types": { 00:12:56.463 "read": true, 00:12:56.463 "write": true, 00:12:56.463 "unmap": false, 00:12:56.463 "flush": false, 00:12:56.463 "reset": true, 00:12:56.463 "nvme_admin": false, 00:12:56.463 "nvme_io": false, 00:12:56.464 "nvme_io_md": false, 00:12:56.464 "write_zeroes": true, 00:12:56.464 "zcopy": false, 00:12:56.464 "get_zone_info": false, 00:12:56.464 "zone_management": false, 00:12:56.464 "zone_append": false, 00:12:56.464 "compare": false, 00:12:56.464 
"compare_and_write": false, 00:12:56.464 "abort": false, 00:12:56.464 "seek_hole": false, 00:12:56.464 "seek_data": false, 00:12:56.464 "copy": false, 00:12:56.464 "nvme_iov_md": false 00:12:56.464 }, 00:12:56.464 "memory_domains": [ 00:12:56.464 { 00:12:56.464 "dma_device_id": "system", 00:12:56.464 "dma_device_type": 1 00:12:56.464 }, 00:12:56.464 { 00:12:56.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.464 "dma_device_type": 2 00:12:56.464 }, 00:12:56.464 { 00:12:56.464 "dma_device_id": "system", 00:12:56.464 "dma_device_type": 1 00:12:56.464 }, 00:12:56.464 { 00:12:56.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.464 "dma_device_type": 2 00:12:56.464 }, 00:12:56.464 { 00:12:56.464 "dma_device_id": "system", 00:12:56.464 "dma_device_type": 1 00:12:56.464 }, 00:12:56.464 { 00:12:56.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.464 "dma_device_type": 2 00:12:56.464 } 00:12:56.464 ], 00:12:56.464 "driver_specific": { 00:12:56.464 "raid": { 00:12:56.464 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:56.464 "strip_size_kb": 0, 00:12:56.464 "state": "online", 00:12:56.464 "raid_level": "raid1", 00:12:56.464 "superblock": true, 00:12:56.464 "num_base_bdevs": 3, 00:12:56.464 "num_base_bdevs_discovered": 3, 00:12:56.464 "num_base_bdevs_operational": 3, 00:12:56.464 "base_bdevs_list": [ 00:12:56.464 { 00:12:56.464 "name": "pt1", 00:12:56.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.464 "is_configured": true, 00:12:56.464 "data_offset": 2048, 00:12:56.464 "data_size": 63488 00:12:56.464 }, 00:12:56.464 { 00:12:56.464 "name": "pt2", 00:12:56.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.464 "is_configured": true, 00:12:56.464 "data_offset": 2048, 00:12:56.464 "data_size": 63488 00:12:56.464 }, 00:12:56.464 { 00:12:56.464 "name": "pt3", 00:12:56.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.464 "is_configured": true, 00:12:56.464 "data_offset": 2048, 00:12:56.464 "data_size": 63488 00:12:56.464 } 
00:12:56.464 ] 00:12:56.464 } 00:12:56.464 } 00:12:56.464 }' 00:12:56.464 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.464 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:56.464 pt2 00:12:56.464 pt3' 00:12:56.464 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.722 [2024-11-27 08:44:53.406221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4e15e8f1-7c19-41cb-be43-0319573302cb 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4e15e8f1-7c19-41cb-be43-0319573302cb ']' 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.722 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.722 [2024-11-27 08:44:53.453814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.722 [2024-11-27 08:44:53.453852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.722 [2024-11-27 08:44:53.453961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.723 [2024-11-27 08:44:53.454091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.723 [2024-11-27 08:44:53.454108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:56.723 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.723 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.723 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.723 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.723 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:56.723 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.981 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 [2024-11-27 08:44:53.598052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:56.982 [2024-11-27 08:44:53.600935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:56.982 [2024-11-27 08:44:53.601021] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:56.982 [2024-11-27 08:44:53.601162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:56.982 [2024-11-27 08:44:53.601257] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:56.982 [2024-11-27 08:44:53.601294] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:56.982 [2024-11-27 08:44:53.601325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.982 [2024-11-27 08:44:53.601357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:56.982 request: 00:12:56.982 { 00:12:56.982 "name": "raid_bdev1", 00:12:56.982 "raid_level": "raid1", 00:12:56.982 "base_bdevs": [ 00:12:56.982 "malloc1", 00:12:56.982 "malloc2", 00:12:56.982 "malloc3" 00:12:56.982 ], 00:12:56.982 "superblock": false, 00:12:56.982 "method": "bdev_raid_create", 00:12:56.982 "req_id": 1 00:12:56.982 } 00:12:56.982 Got JSON-RPC error response 00:12:56.982 response: 00:12:56.982 { 00:12:56.982 "code": -17, 00:12:56.982 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:56.982 } 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 [2024-11-27 08:44:53.666025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:56.982 [2024-11-27 08:44:53.666159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.982 [2024-11-27 08:44:53.666207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:56.982 [2024-11-27 08:44:53.666226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.982 [2024-11-27 08:44:53.669603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.982 [2024-11-27 08:44:53.669651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.982 [2024-11-27 08:44:53.669790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:56.982 [2024-11-27 08:44:53.669874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.982 pt1 00:12:56.982 
08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.982 "name": "raid_bdev1", 00:12:56.982 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:56.982 "strip_size_kb": 0, 00:12:56.982 
"state": "configuring", 00:12:56.982 "raid_level": "raid1", 00:12:56.982 "superblock": true, 00:12:56.982 "num_base_bdevs": 3, 00:12:56.982 "num_base_bdevs_discovered": 1, 00:12:56.982 "num_base_bdevs_operational": 3, 00:12:56.982 "base_bdevs_list": [ 00:12:56.982 { 00:12:56.982 "name": "pt1", 00:12:56.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.982 "is_configured": true, 00:12:56.982 "data_offset": 2048, 00:12:56.982 "data_size": 63488 00:12:56.982 }, 00:12:56.982 { 00:12:56.982 "name": null, 00:12:56.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.982 "is_configured": false, 00:12:56.982 "data_offset": 2048, 00:12:56.982 "data_size": 63488 00:12:56.982 }, 00:12:56.982 { 00:12:56.982 "name": null, 00:12:56.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.982 "is_configured": false, 00:12:56.982 "data_offset": 2048, 00:12:56.982 "data_size": 63488 00:12:56.982 } 00:12:56.982 ] 00:12:56.982 }' 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.982 08:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.550 [2024-11-27 08:44:54.198319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.550 [2024-11-27 08:44:54.198418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.550 [2024-11-27 08:44:54.198459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:57.550 
[2024-11-27 08:44:54.198477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.550 [2024-11-27 08:44:54.199211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.550 [2024-11-27 08:44:54.199270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.550 [2024-11-27 08:44:54.199418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:57.550 [2024-11-27 08:44:54.199468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.550 pt2 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.550 [2024-11-27 08:44:54.206272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.550 "name": "raid_bdev1", 00:12:57.550 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:57.550 "strip_size_kb": 0, 00:12:57.550 "state": "configuring", 00:12:57.550 "raid_level": "raid1", 00:12:57.550 "superblock": true, 00:12:57.550 "num_base_bdevs": 3, 00:12:57.550 "num_base_bdevs_discovered": 1, 00:12:57.550 "num_base_bdevs_operational": 3, 00:12:57.550 "base_bdevs_list": [ 00:12:57.550 { 00:12:57.550 "name": "pt1", 00:12:57.550 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.550 "is_configured": true, 00:12:57.550 "data_offset": 2048, 00:12:57.550 "data_size": 63488 00:12:57.550 }, 00:12:57.550 { 00:12:57.550 "name": null, 00:12:57.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.550 "is_configured": false, 00:12:57.550 "data_offset": 0, 00:12:57.550 "data_size": 63488 00:12:57.550 }, 00:12:57.550 { 00:12:57.550 "name": null, 00:12:57.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.550 "is_configured": false, 00:12:57.550 
"data_offset": 2048, 00:12:57.550 "data_size": 63488 00:12:57.550 } 00:12:57.550 ] 00:12:57.550 }' 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.550 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.118 [2024-11-27 08:44:54.718502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.118 [2024-11-27 08:44:54.718631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.118 [2024-11-27 08:44:54.718665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:58.118 [2024-11-27 08:44:54.718687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.118 [2024-11-27 08:44:54.719436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.118 [2024-11-27 08:44:54.719489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.118 [2024-11-27 08:44:54.719614] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.118 [2024-11-27 08:44:54.719678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.118 pt2 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.118 08:44:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.118 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.118 [2024-11-27 08:44:54.730477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:58.118 [2024-11-27 08:44:54.730560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.118 [2024-11-27 08:44:54.730596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:58.119 [2024-11-27 08:44:54.730620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.119 [2024-11-27 08:44:54.731243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.119 [2024-11-27 08:44:54.731291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:58.119 [2024-11-27 08:44:54.731415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:58.119 [2024-11-27 08:44:54.731459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:58.119 [2024-11-27 08:44:54.731650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:58.119 [2024-11-27 08:44:54.731688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.119 [2024-11-27 08:44:54.732031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:58.119 [2024-11-27 08:44:54.732264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:12:58.119 [2024-11-27 08:44:54.732290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:58.119 [2024-11-27 08:44:54.732509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.119 pt3 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.119 08:44:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.119 "name": "raid_bdev1", 00:12:58.119 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:58.119 "strip_size_kb": 0, 00:12:58.119 "state": "online", 00:12:58.119 "raid_level": "raid1", 00:12:58.119 "superblock": true, 00:12:58.119 "num_base_bdevs": 3, 00:12:58.119 "num_base_bdevs_discovered": 3, 00:12:58.119 "num_base_bdevs_operational": 3, 00:12:58.119 "base_bdevs_list": [ 00:12:58.119 { 00:12:58.119 "name": "pt1", 00:12:58.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.119 "is_configured": true, 00:12:58.119 "data_offset": 2048, 00:12:58.119 "data_size": 63488 00:12:58.119 }, 00:12:58.119 { 00:12:58.119 "name": "pt2", 00:12:58.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.119 "is_configured": true, 00:12:58.119 "data_offset": 2048, 00:12:58.119 "data_size": 63488 00:12:58.119 }, 00:12:58.119 { 00:12:58.119 "name": "pt3", 00:12:58.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.119 "is_configured": true, 00:12:58.119 "data_offset": 2048, 00:12:58.119 "data_size": 63488 00:12:58.119 } 00:12:58.119 ] 00:12:58.119 }' 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.119 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.686 [2024-11-27 08:44:55.283029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.686 "name": "raid_bdev1", 00:12:58.686 "aliases": [ 00:12:58.686 "4e15e8f1-7c19-41cb-be43-0319573302cb" 00:12:58.686 ], 00:12:58.686 "product_name": "Raid Volume", 00:12:58.686 "block_size": 512, 00:12:58.686 "num_blocks": 63488, 00:12:58.686 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:58.686 "assigned_rate_limits": { 00:12:58.686 "rw_ios_per_sec": 0, 00:12:58.686 "rw_mbytes_per_sec": 0, 00:12:58.686 "r_mbytes_per_sec": 0, 00:12:58.686 "w_mbytes_per_sec": 0 00:12:58.686 }, 00:12:58.686 "claimed": false, 00:12:58.686 "zoned": false, 00:12:58.686 "supported_io_types": { 00:12:58.686 "read": true, 00:12:58.686 "write": true, 00:12:58.686 "unmap": false, 00:12:58.686 "flush": false, 00:12:58.686 "reset": true, 00:12:58.686 "nvme_admin": false, 00:12:58.686 "nvme_io": false, 00:12:58.686 "nvme_io_md": false, 00:12:58.686 "write_zeroes": true, 00:12:58.686 "zcopy": false, 00:12:58.686 "get_zone_info": 
false, 00:12:58.686 "zone_management": false, 00:12:58.686 "zone_append": false, 00:12:58.686 "compare": false, 00:12:58.686 "compare_and_write": false, 00:12:58.686 "abort": false, 00:12:58.686 "seek_hole": false, 00:12:58.686 "seek_data": false, 00:12:58.686 "copy": false, 00:12:58.686 "nvme_iov_md": false 00:12:58.686 }, 00:12:58.686 "memory_domains": [ 00:12:58.686 { 00:12:58.686 "dma_device_id": "system", 00:12:58.686 "dma_device_type": 1 00:12:58.686 }, 00:12:58.686 { 00:12:58.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.686 "dma_device_type": 2 00:12:58.686 }, 00:12:58.686 { 00:12:58.686 "dma_device_id": "system", 00:12:58.686 "dma_device_type": 1 00:12:58.686 }, 00:12:58.686 { 00:12:58.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.686 "dma_device_type": 2 00:12:58.686 }, 00:12:58.686 { 00:12:58.686 "dma_device_id": "system", 00:12:58.686 "dma_device_type": 1 00:12:58.686 }, 00:12:58.686 { 00:12:58.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.686 "dma_device_type": 2 00:12:58.686 } 00:12:58.686 ], 00:12:58.686 "driver_specific": { 00:12:58.686 "raid": { 00:12:58.686 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:58.686 "strip_size_kb": 0, 00:12:58.686 "state": "online", 00:12:58.686 "raid_level": "raid1", 00:12:58.686 "superblock": true, 00:12:58.686 "num_base_bdevs": 3, 00:12:58.686 "num_base_bdevs_discovered": 3, 00:12:58.686 "num_base_bdevs_operational": 3, 00:12:58.686 "base_bdevs_list": [ 00:12:58.686 { 00:12:58.686 "name": "pt1", 00:12:58.686 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.686 "is_configured": true, 00:12:58.686 "data_offset": 2048, 00:12:58.686 "data_size": 63488 00:12:58.686 }, 00:12:58.686 { 00:12:58.686 "name": "pt2", 00:12:58.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.686 "is_configured": true, 00:12:58.686 "data_offset": 2048, 00:12:58.686 "data_size": 63488 00:12:58.686 }, 00:12:58.686 { 00:12:58.686 "name": "pt3", 00:12:58.686 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:58.686 "is_configured": true, 00:12:58.686 "data_offset": 2048, 00:12:58.686 "data_size": 63488 00:12:58.686 } 00:12:58.686 ] 00:12:58.686 } 00:12:58.686 } 00:12:58.686 }' 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:58.686 pt2 00:12:58.686 pt3' 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.686 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.946 [2024-11-27 08:44:55.643093] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4e15e8f1-7c19-41cb-be43-0319573302cb '!=' 4e15e8f1-7c19-41cb-be43-0319573302cb ']' 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.946 [2024-11-27 08:44:55.686861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.946 08:44:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.946 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.205 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.205 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.205 "name": "raid_bdev1", 00:12:59.205 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:59.205 "strip_size_kb": 0, 00:12:59.205 "state": "online", 00:12:59.205 "raid_level": "raid1", 00:12:59.205 "superblock": true, 00:12:59.205 "num_base_bdevs": 3, 00:12:59.205 "num_base_bdevs_discovered": 2, 00:12:59.205 "num_base_bdevs_operational": 2, 00:12:59.205 "base_bdevs_list": [ 00:12:59.205 { 00:12:59.205 "name": null, 00:12:59.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.205 "is_configured": false, 00:12:59.205 "data_offset": 0, 00:12:59.205 "data_size": 63488 00:12:59.205 }, 00:12:59.205 { 00:12:59.205 "name": "pt2", 00:12:59.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.205 "is_configured": true, 00:12:59.205 "data_offset": 2048, 00:12:59.205 "data_size": 63488 00:12:59.205 }, 00:12:59.205 { 00:12:59.205 "name": "pt3", 00:12:59.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.205 "is_configured": true, 00:12:59.205 "data_offset": 2048, 00:12:59.205 "data_size": 63488 00:12:59.205 } 
00:12:59.205 ] 00:12:59.205 }' 00:12:59.205 08:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.205 08:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.464 [2024-11-27 08:44:56.198877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.464 [2024-11-27 08:44:56.198921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.464 [2024-11-27 08:44:56.199043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.464 [2024-11-27 08:44:56.199135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.464 [2024-11-27 08:44:56.199170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.464 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.722 08:44:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.722 [2024-11-27 08:44:56.270824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.722 [2024-11-27 08:44:56.270906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.722 [2024-11-27 08:44:56.270934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:59.722 [2024-11-27 08:44:56.270954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.722 [2024-11-27 08:44:56.274138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.722 [2024-11-27 08:44:56.274190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.722 [2024-11-27 08:44:56.274301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:59.722 [2024-11-27 08:44:56.274393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.722 pt2 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.722 08:44:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.722 "name": "raid_bdev1", 00:12:59.722 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:12:59.722 "strip_size_kb": 0, 00:12:59.722 "state": "configuring", 00:12:59.722 "raid_level": "raid1", 00:12:59.722 "superblock": true, 00:12:59.722 "num_base_bdevs": 3, 00:12:59.722 "num_base_bdevs_discovered": 1, 00:12:59.722 "num_base_bdevs_operational": 2, 00:12:59.722 "base_bdevs_list": [ 00:12:59.722 { 00:12:59.722 "name": null, 00:12:59.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.722 "is_configured": false, 00:12:59.722 "data_offset": 2048, 00:12:59.722 "data_size": 63488 00:12:59.722 }, 00:12:59.722 { 00:12:59.722 "name": "pt2", 00:12:59.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.722 "is_configured": true, 00:12:59.722 "data_offset": 2048, 00:12:59.722 "data_size": 63488 00:12:59.722 }, 00:12:59.722 { 00:12:59.722 "name": null, 00:12:59.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.722 "is_configured": false, 00:12:59.722 "data_offset": 2048, 00:12:59.722 "data_size": 63488 00:12:59.722 } 
00:12:59.722 ] 00:12:59.722 }' 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.722 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.290 [2024-11-27 08:44:56.747037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:00.290 [2024-11-27 08:44:56.747155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.290 [2024-11-27 08:44:56.747194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:00.290 [2024-11-27 08:44:56.747215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.290 [2024-11-27 08:44:56.747900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.290 [2024-11-27 08:44:56.747944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:00.290 [2024-11-27 08:44:56.748077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:00.290 [2024-11-27 08:44:56.748126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:00.290 [2024-11-27 08:44:56.748290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:13:00.290 [2024-11-27 08:44:56.748325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.290 [2024-11-27 08:44:56.748695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:00.290 [2024-11-27 08:44:56.748907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:00.290 [2024-11-27 08:44:56.748928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:00.290 [2024-11-27 08:44:56.749122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.290 pt3 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.290 
08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.290 "name": "raid_bdev1", 00:13:00.290 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:13:00.290 "strip_size_kb": 0, 00:13:00.290 "state": "online", 00:13:00.290 "raid_level": "raid1", 00:13:00.290 "superblock": true, 00:13:00.290 "num_base_bdevs": 3, 00:13:00.290 "num_base_bdevs_discovered": 2, 00:13:00.290 "num_base_bdevs_operational": 2, 00:13:00.290 "base_bdevs_list": [ 00:13:00.290 { 00:13:00.290 "name": null, 00:13:00.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.290 "is_configured": false, 00:13:00.290 "data_offset": 2048, 00:13:00.290 "data_size": 63488 00:13:00.290 }, 00:13:00.290 { 00:13:00.290 "name": "pt2", 00:13:00.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.290 "is_configured": true, 00:13:00.290 "data_offset": 2048, 00:13:00.290 "data_size": 63488 00:13:00.290 }, 00:13:00.290 { 00:13:00.290 "name": "pt3", 00:13:00.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.290 "is_configured": true, 00:13:00.290 "data_offset": 2048, 00:13:00.290 "data_size": 63488 00:13:00.290 } 00:13:00.290 ] 00:13:00.290 }' 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.290 08:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 [2024-11-27 08:44:57.219137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.600 [2024-11-27 08:44:57.219189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.600 [2024-11-27 08:44:57.219310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.600 [2024-11-27 08:44:57.219430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.600 [2024-11-27 08:44:57.219449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.600 [2024-11-27 08:44:57.279128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:00.600 [2024-11-27 08:44:57.279209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.600 [2024-11-27 08:44:57.279245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:00.600 [2024-11-27 08:44:57.279262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.600 [2024-11-27 08:44:57.282507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.600 [2024-11-27 08:44:57.282558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:00.600 [2024-11-27 08:44:57.282693] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:00.600 [2024-11-27 08:44:57.282761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:00.600 [2024-11-27 08:44:57.282944] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:00.600 [2024-11-27 08:44:57.282973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.600 [2024-11-27 08:44:57.283001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:13:00.600 [2024-11-27 08:44:57.283087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.600 pt1 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.600 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.601 08:44:57 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.880 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.880 "name": "raid_bdev1", 00:13:00.880 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:13:00.880 "strip_size_kb": 0, 00:13:00.880 "state": "configuring", 00:13:00.880 "raid_level": "raid1", 00:13:00.880 "superblock": true, 00:13:00.880 "num_base_bdevs": 3, 00:13:00.880 "num_base_bdevs_discovered": 1, 00:13:00.880 "num_base_bdevs_operational": 2, 00:13:00.880 "base_bdevs_list": [ 00:13:00.880 { 00:13:00.880 "name": null, 00:13:00.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.880 "is_configured": false, 00:13:00.880 "data_offset": 2048, 00:13:00.880 "data_size": 63488 00:13:00.880 }, 00:13:00.880 { 00:13:00.880 "name": "pt2", 00:13:00.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.880 "is_configured": true, 00:13:00.880 "data_offset": 2048, 00:13:00.880 "data_size": 63488 00:13:00.880 }, 00:13:00.880 { 00:13:00.880 "name": null, 00:13:00.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.880 "is_configured": false, 00:13:00.880 "data_offset": 2048, 00:13:00.880 "data_size": 63488 00:13:00.880 } 00:13:00.880 ] 00:13:00.880 }' 00:13:00.880 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.880 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.138 [2024-11-27 08:44:57.859455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.138 [2024-11-27 08:44:57.859550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.138 [2024-11-27 08:44:57.859587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:01.138 [2024-11-27 08:44:57.859604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.138 [2024-11-27 08:44:57.860268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.138 [2024-11-27 08:44:57.860306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.138 [2024-11-27 08:44:57.860444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:01.138 [2024-11-27 08:44:57.860514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.138 [2024-11-27 08:44:57.860691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:01.138 [2024-11-27 08:44:57.860719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.138 [2024-11-27 08:44:57.861063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:01.138 [2024-11-27 08:44:57.861286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:01.138 [2024-11-27 08:44:57.861319] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:01.138 [2024-11-27 08:44:57.861528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.138 pt3 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.138 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:01.397 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.397 "name": "raid_bdev1", 00:13:01.397 "uuid": "4e15e8f1-7c19-41cb-be43-0319573302cb", 00:13:01.397 "strip_size_kb": 0, 00:13:01.397 "state": "online", 00:13:01.397 "raid_level": "raid1", 00:13:01.397 "superblock": true, 00:13:01.397 "num_base_bdevs": 3, 00:13:01.397 "num_base_bdevs_discovered": 2, 00:13:01.397 "num_base_bdevs_operational": 2, 00:13:01.397 "base_bdevs_list": [ 00:13:01.397 { 00:13:01.397 "name": null, 00:13:01.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.397 "is_configured": false, 00:13:01.397 "data_offset": 2048, 00:13:01.397 "data_size": 63488 00:13:01.397 }, 00:13:01.397 { 00:13:01.397 "name": "pt2", 00:13:01.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.397 "is_configured": true, 00:13:01.397 "data_offset": 2048, 00:13:01.397 "data_size": 63488 00:13:01.397 }, 00:13:01.397 { 00:13:01.397 "name": "pt3", 00:13:01.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.397 "is_configured": true, 00:13:01.397 "data_offset": 2048, 00:13:01.397 "data_size": 63488 00:13:01.397 } 00:13:01.397 ] 00:13:01.397 }' 00:13:01.397 08:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.397 08:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.655 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:01.655 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:01.655 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.655 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.655 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.913 08:44:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:01.913 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.913 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:01.913 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.913 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.913 [2024-11-27 08:44:58.459984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.913 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4e15e8f1-7c19-41cb-be43-0319573302cb '!=' 4e15e8f1-7c19-41cb-be43-0319573302cb ']' 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68852 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 68852 ']' 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 68852 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # uname 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 68852 00:13:01.914 killing process with pid 68852 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 68852' 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # kill 68852 00:13:01.914 [2024-11-27 08:44:58.534684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.914 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@975 -- # wait 68852 00:13:01.914 [2024-11-27 08:44:58.534836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.914 [2024-11-27 08:44:58.534929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.914 [2024-11-27 08:44:58.534950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:02.172 [2024-11-27 08:44:58.826182] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.542 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:03.542 00:13:03.542 real 0m8.693s 00:13:03.542 user 0m14.085s 00:13:03.542 sys 0m1.285s 00:13:03.542 ************************************ 00:13:03.542 END TEST raid_superblock_test 00:13:03.542 ************************************ 00:13:03.542 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:03.542 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.542 08:45:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:03.542 08:45:00 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:13:03.542 08:45:00 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:03.542 08:45:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.542 ************************************ 00:13:03.542 START TEST raid_read_error_test 00:13:03.542 ************************************ 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid1 3 read 00:13:03.542 08:45:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.542 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:03.543 08:45:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:03.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.koNrH7FFTl 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69308 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69308 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 69308 ']' 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:03.543 08:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.543 [2024-11-27 08:45:00.128976] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:13:03.543 [2024-11-27 08:45:00.129374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69308 ] 00:13:03.801 [2024-11-27 08:45:00.310832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.801 [2024-11-27 08:45:00.481550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.060 [2024-11-27 08:45:00.705328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.060 [2024-11-27 08:45:00.705707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.630 BaseBdev1_malloc 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.630 true 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.630 [2024-11-27 08:45:01.175302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:04.630 [2024-11-27 08:45:01.175409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.630 [2024-11-27 08:45:01.175441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:04.630 [2024-11-27 08:45:01.175459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.630 [2024-11-27 08:45:01.178426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.630 [2024-11-27 08:45:01.178494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.630 BaseBdev1 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.630 BaseBdev2_malloc 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.630 true 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.630 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.630 [2024-11-27 08:45:01.246662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:04.630 [2024-11-27 08:45:01.246744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.630 [2024-11-27 08:45:01.246774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:04.630 [2024-11-27 08:45:01.246792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.630 [2024-11-27 08:45:01.249742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.631 [2024-11-27 08:45:01.249790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.631 BaseBdev2 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.631 BaseBdev3_malloc 00:13:04.631 08:45:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.631 true 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.631 [2024-11-27 08:45:01.330306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:04.631 [2024-11-27 08:45:01.330420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.631 [2024-11-27 08:45:01.330468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:04.631 [2024-11-27 08:45:01.330487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.631 [2024-11-27 08:45:01.333635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.631 [2024-11-27 08:45:01.333686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:04.631 BaseBdev3 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.631 [2024-11-27 08:45:01.342563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.631 [2024-11-27 08:45:01.345433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.631 [2024-11-27 08:45:01.345581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.631 [2024-11-27 08:45:01.345916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:04.631 [2024-11-27 08:45:01.345946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.631 [2024-11-27 08:45:01.346390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:04.631 [2024-11-27 08:45:01.346690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:04.631 [2024-11-27 08:45:01.346723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:04.631 [2024-11-27 08:45:01.347068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.631 08:45:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.631 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.890 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.890 "name": "raid_bdev1", 00:13:04.890 "uuid": "32aea82c-de26-41fa-8a91-7788dd8ad309", 00:13:04.890 "strip_size_kb": 0, 00:13:04.890 "state": "online", 00:13:04.890 "raid_level": "raid1", 00:13:04.890 "superblock": true, 00:13:04.890 "num_base_bdevs": 3, 00:13:04.890 "num_base_bdevs_discovered": 3, 00:13:04.890 "num_base_bdevs_operational": 3, 00:13:04.890 "base_bdevs_list": [ 00:13:04.890 { 00:13:04.890 "name": "BaseBdev1", 00:13:04.890 "uuid": "adce8f17-e90d-5097-b490-70e1c8f62cae", 00:13:04.890 "is_configured": true, 00:13:04.890 "data_offset": 2048, 00:13:04.890 "data_size": 63488 00:13:04.890 }, 00:13:04.890 { 00:13:04.890 "name": "BaseBdev2", 00:13:04.890 "uuid": "928318a6-3bf8-5bd8-b6d1-187c52bc6c2c", 00:13:04.890 "is_configured": true, 00:13:04.890 "data_offset": 2048, 00:13:04.890 "data_size": 63488 
00:13:04.890 }, 00:13:04.890 { 00:13:04.890 "name": "BaseBdev3", 00:13:04.890 "uuid": "0a35bef9-a848-5875-b76c-7e01c1fa7495", 00:13:04.890 "is_configured": true, 00:13:04.890 "data_offset": 2048, 00:13:04.890 "data_size": 63488 00:13:04.890 } 00:13:04.890 ] 00:13:04.890 }' 00:13:04.890 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.891 08:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.149 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.149 08:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.407 [2024-11-27 08:45:02.004658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:06.340 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:06.340 08:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.340 08:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 08:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.340 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.340 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:06.340 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.341 
08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.341 "name": "raid_bdev1", 00:13:06.341 "uuid": "32aea82c-de26-41fa-8a91-7788dd8ad309", 00:13:06.341 "strip_size_kb": 0, 00:13:06.341 "state": "online", 00:13:06.341 "raid_level": "raid1", 00:13:06.341 "superblock": true, 00:13:06.341 "num_base_bdevs": 3, 00:13:06.341 "num_base_bdevs_discovered": 3, 00:13:06.341 "num_base_bdevs_operational": 3, 00:13:06.341 "base_bdevs_list": [ 00:13:06.341 { 00:13:06.341 "name": "BaseBdev1", 00:13:06.341 "uuid": "adce8f17-e90d-5097-b490-70e1c8f62cae", 
00:13:06.341 "is_configured": true, 00:13:06.341 "data_offset": 2048, 00:13:06.341 "data_size": 63488 00:13:06.341 }, 00:13:06.341 { 00:13:06.341 "name": "BaseBdev2", 00:13:06.341 "uuid": "928318a6-3bf8-5bd8-b6d1-187c52bc6c2c", 00:13:06.341 "is_configured": true, 00:13:06.341 "data_offset": 2048, 00:13:06.341 "data_size": 63488 00:13:06.341 }, 00:13:06.341 { 00:13:06.341 "name": "BaseBdev3", 00:13:06.341 "uuid": "0a35bef9-a848-5875-b76c-7e01c1fa7495", 00:13:06.341 "is_configured": true, 00:13:06.341 "data_offset": 2048, 00:13:06.341 "data_size": 63488 00:13:06.341 } 00:13:06.341 ] 00:13:06.341 }' 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.341 08:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.907 [2024-11-27 08:45:03.398147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.907 [2024-11-27 08:45:03.398192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.907 [2024-11-27 08:45:03.401671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.907 [2024-11-27 08:45:03.401744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.907 [2024-11-27 08:45:03.401906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.907 [2024-11-27 08:45:03.401932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:06.907 { 00:13:06.907 "results": [ 00:13:06.907 { 00:13:06.907 "job": "raid_bdev1", 
00:13:06.907 "core_mask": "0x1", 00:13:06.907 "workload": "randrw", 00:13:06.907 "percentage": 50, 00:13:06.907 "status": "finished", 00:13:06.907 "queue_depth": 1, 00:13:06.907 "io_size": 131072, 00:13:06.907 "runtime": 1.390877, 00:13:06.907 "iops": 8061.8199883958105, 00:13:06.907 "mibps": 1007.7274985494763, 00:13:06.907 "io_failed": 0, 00:13:06.907 "io_timeout": 0, 00:13:06.907 "avg_latency_us": 119.88328093203506, 00:13:06.907 "min_latency_us": 42.35636363636364, 00:13:06.907 "max_latency_us": 1899.0545454545454 00:13:06.907 } 00:13:06.907 ], 00:13:06.907 "core_count": 1 00:13:06.907 } 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69308 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 69308 ']' 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 69308 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # uname 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 69308 00:13:06.907 killing process with pid 69308 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 69308' 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 69308 00:13:06.907 08:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 69308 00:13:06.907 [2024-11-27 08:45:03.437068] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.907 [2024-11-27 08:45:03.663098] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.koNrH7FFTl 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:08.281 00:13:08.281 real 0m4.857s 00:13:08.281 user 0m5.948s 00:13:08.281 sys 0m0.630s 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:08.281 ************************************ 00:13:08.281 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.281 END TEST raid_read_error_test 00:13:08.281 ************************************ 00:13:08.281 08:45:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:08.281 08:45:04 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:13:08.281 08:45:04 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:08.281 08:45:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.281 ************************************ 00:13:08.281 START TEST raid_write_error_test 00:13:08.281 ************************************ 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1126 -- # raid_io_error_test raid1 3 write 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:08.281 08:45:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Erl9OqhEt9 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69455 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69455 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 69455 ']' 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:08.281 08:45:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.540 [2024-11-27 08:45:05.061452] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:13:08.540 [2024-11-27 08:45:05.061650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69455 ] 00:13:08.540 [2024-11-27 08:45:05.249440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.798 [2024-11-27 08:45:05.398699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.057 [2024-11-27 08:45:05.624022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.057 [2024-11-27 08:45:05.624074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.316 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:09.316 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:13:09.316 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.316 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.316 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.316 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.575 BaseBdev1_malloc 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.575 true 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.575 [2024-11-27 08:45:06.089656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:09.575 [2024-11-27 08:45:06.089738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.575 [2024-11-27 08:45:06.089768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:09.575 [2024-11-27 08:45:06.089787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.575 [2024-11-27 08:45:06.092751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.575 [2024-11-27 08:45:06.092804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.575 BaseBdev1 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.575 BaseBdev2_malloc 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.575 true 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.575 [2024-11-27 08:45:06.154806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:09.575 [2024-11-27 08:45:06.154883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.575 [2024-11-27 08:45:06.154910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:09.575 [2024-11-27 08:45:06.154928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.575 [2024-11-27 08:45:06.158105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.575 [2024-11-27 08:45:06.158180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.575 BaseBdev2 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.575 08:45:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.575 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.575 BaseBdev3_malloc 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.576 true 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.576 [2024-11-27 08:45:06.231890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:09.576 [2024-11-27 08:45:06.231964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.576 [2024-11-27 08:45:06.231991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:09.576 [2024-11-27 08:45:06.232010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.576 [2024-11-27 08:45:06.235015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.576 [2024-11-27 08:45:06.235069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:09.576 BaseBdev3 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.576 [2024-11-27 08:45:06.240020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.576 [2024-11-27 08:45:06.242738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.576 [2024-11-27 08:45:06.242850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.576 [2024-11-27 08:45:06.243132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:09.576 [2024-11-27 08:45:06.243161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.576 [2024-11-27 08:45:06.243506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:09.576 [2024-11-27 08:45:06.243756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:09.576 [2024-11-27 08:45:06.243788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:09.576 [2024-11-27 08:45:06.244027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.576 "name": "raid_bdev1", 00:13:09.576 "uuid": "2365076a-2171-4bb6-a6f1-671894cabc36", 00:13:09.576 "strip_size_kb": 0, 00:13:09.576 "state": "online", 00:13:09.576 "raid_level": "raid1", 00:13:09.576 "superblock": true, 00:13:09.576 "num_base_bdevs": 3, 00:13:09.576 "num_base_bdevs_discovered": 3, 00:13:09.576 "num_base_bdevs_operational": 3, 00:13:09.576 "base_bdevs_list": [ 00:13:09.576 { 00:13:09.576 "name": "BaseBdev1", 00:13:09.576 
"uuid": "dd7edaed-d3f8-5567-88e2-724181b31a10", 00:13:09.576 "is_configured": true, 00:13:09.576 "data_offset": 2048, 00:13:09.576 "data_size": 63488 00:13:09.576 }, 00:13:09.576 { 00:13:09.576 "name": "BaseBdev2", 00:13:09.576 "uuid": "152cc834-5fd2-58f7-9afe-0033edd0dd02", 00:13:09.576 "is_configured": true, 00:13:09.576 "data_offset": 2048, 00:13:09.576 "data_size": 63488 00:13:09.576 }, 00:13:09.576 { 00:13:09.576 "name": "BaseBdev3", 00:13:09.576 "uuid": "037efcd4-b2d4-5ffe-a83a-c855748356a3", 00:13:09.576 "is_configured": true, 00:13:09.576 "data_offset": 2048, 00:13:09.576 "data_size": 63488 00:13:09.576 } 00:13:09.576 ] 00:13:09.576 }' 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.576 08:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.142 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:10.143 08:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:10.400 [2024-11-27 08:45:06.937885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.337 [2024-11-27 08:45:07.821921] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:11.337 [2024-11-27 08:45:07.822002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.337 [2024-11-27 08:45:07.822278] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.337 "name": "raid_bdev1", 00:13:11.337 "uuid": "2365076a-2171-4bb6-a6f1-671894cabc36", 00:13:11.337 "strip_size_kb": 0, 00:13:11.337 "state": "online", 00:13:11.337 "raid_level": "raid1", 00:13:11.337 "superblock": true, 00:13:11.337 "num_base_bdevs": 3, 00:13:11.337 "num_base_bdevs_discovered": 2, 00:13:11.337 "num_base_bdevs_operational": 2, 00:13:11.337 "base_bdevs_list": [ 00:13:11.337 { 00:13:11.337 "name": null, 00:13:11.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.337 "is_configured": false, 00:13:11.337 "data_offset": 0, 00:13:11.337 "data_size": 63488 00:13:11.337 }, 00:13:11.337 { 00:13:11.337 "name": "BaseBdev2", 00:13:11.337 "uuid": "152cc834-5fd2-58f7-9afe-0033edd0dd02", 00:13:11.337 "is_configured": true, 00:13:11.337 "data_offset": 2048, 00:13:11.337 "data_size": 63488 00:13:11.337 }, 00:13:11.337 { 00:13:11.337 "name": "BaseBdev3", 00:13:11.337 "uuid": "037efcd4-b2d4-5ffe-a83a-c855748356a3", 00:13:11.337 "is_configured": true, 00:13:11.337 "data_offset": 2048, 00:13:11.337 "data_size": 63488 00:13:11.337 } 00:13:11.337 ] 00:13:11.337 }' 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.337 08:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.595 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.595 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 [2024-11-27 08:45:08.348011] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.595 [2024-11-27 08:45:08.348062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.595 [2024-11-27 08:45:08.351497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.595 [2024-11-27 08:45:08.351579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.595 [2024-11-27 08:45:08.351702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.595 [2024-11-27 08:45:08.351723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:11.595 { 00:13:11.595 "results": [ 00:13:11.595 { 00:13:11.595 "job": "raid_bdev1", 00:13:11.596 "core_mask": "0x1", 00:13:11.596 "workload": "randrw", 00:13:11.596 "percentage": 50, 00:13:11.596 "status": "finished", 00:13:11.596 "queue_depth": 1, 00:13:11.596 "io_size": 131072, 00:13:11.596 "runtime": 1.407497, 00:13:11.596 "iops": 9107.65706783034, 00:13:11.596 "mibps": 1138.4571334787925, 00:13:11.596 "io_failed": 0, 00:13:11.596 "io_timeout": 0, 00:13:11.596 "avg_latency_us": 105.54229020842641, 00:13:11.596 "min_latency_us": 41.192727272727275, 00:13:11.596 "max_latency_us": 1980.9745454545455 00:13:11.596 } 00:13:11.596 ], 00:13:11.596 "core_count": 1 00:13:11.596 } 00:13:11.596 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.596 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69455 00:13:11.596 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' -z 69455 ']' 00:13:11.596 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 69455 00:13:11.854 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # uname 00:13:11.854 08:45:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:11.854 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 69455 00:13:11.854 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:11.854 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:11.854 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 69455' 00:13:11.854 killing process with pid 69455 00:13:11.854 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 69455 00:13:11.854 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@975 -- # wait 69455 00:13:11.854 [2024-11-27 08:45:08.393381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.112 [2024-11-27 08:45:08.619914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.488 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Erl9OqhEt9 00:13:13.488 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:13.489 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:13.489 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:13.489 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:13.489 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.489 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:13.489 08:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:13.489 00:13:13.489 real 0m4.944s 00:13:13.489 user 0m6.031s 00:13:13.489 sys 0m0.683s 00:13:13.489 08:45:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:13.489 08:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.489 ************************************ 00:13:13.489 END TEST raid_write_error_test 00:13:13.489 ************************************ 00:13:13.489 08:45:09 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:13.489 08:45:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:13.489 08:45:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:13.489 08:45:09 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:13:13.489 08:45:09 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:13.489 08:45:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.489 ************************************ 00:13:13.489 START TEST raid_state_function_test 00:13:13.489 ************************************ 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test raid0 4 false 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:13.489 
08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69599 00:13:13.489 Process raid pid: 69599 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69599' 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69599 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 69599 ']' 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:13.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:13.489 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.489 [2024-11-27 08:45:10.068199] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:13:13.489 [2024-11-27 08:45:10.068404] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.748 [2024-11-27 08:45:10.263629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.748 [2024-11-27 08:45:10.431587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.006 [2024-11-27 08:45:10.655747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.006 [2024-11-27 08:45:10.655829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.573 [2024-11-27 08:45:11.166913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.573 [2024-11-27 08:45:11.166997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.573 [2024-11-27 08:45:11.167014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.573 [2024-11-27 08:45:11.167030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.573 [2024-11-27 08:45:11.167040] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:14.573 [2024-11-27 08:45:11.167054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.573 [2024-11-27 08:45:11.167064] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:14.573 [2024-11-27 08:45:11.167077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.573 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.574 "name": "Existed_Raid", 00:13:14.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.574 "strip_size_kb": 64, 00:13:14.574 "state": "configuring", 00:13:14.574 "raid_level": "raid0", 00:13:14.574 "superblock": false, 00:13:14.574 "num_base_bdevs": 4, 00:13:14.574 "num_base_bdevs_discovered": 0, 00:13:14.574 "num_base_bdevs_operational": 4, 00:13:14.574 "base_bdevs_list": [ 00:13:14.574 { 00:13:14.574 "name": "BaseBdev1", 00:13:14.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.574 "is_configured": false, 00:13:14.574 "data_offset": 0, 00:13:14.574 "data_size": 0 00:13:14.574 }, 00:13:14.574 { 00:13:14.574 "name": "BaseBdev2", 00:13:14.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.574 "is_configured": false, 00:13:14.574 "data_offset": 0, 00:13:14.574 "data_size": 0 00:13:14.574 }, 00:13:14.574 { 00:13:14.574 "name": "BaseBdev3", 00:13:14.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.574 "is_configured": false, 00:13:14.574 "data_offset": 0, 00:13:14.574 "data_size": 0 00:13:14.574 }, 00:13:14.574 { 00:13:14.574 "name": "BaseBdev4", 00:13:14.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.574 "is_configured": false, 00:13:14.574 "data_offset": 0, 00:13:14.574 "data_size": 0 00:13:14.574 } 00:13:14.574 ] 00:13:14.574 }' 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.574 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.141 [2024-11-27 08:45:11.715071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.141 [2024-11-27 08:45:11.715146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.141 [2024-11-27 08:45:11.723029] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:15.141 [2024-11-27 08:45:11.723083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:15.141 [2024-11-27 08:45:11.723114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.141 [2024-11-27 08:45:11.723131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.141 [2024-11-27 08:45:11.723142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.141 [2024-11-27 08:45:11.723157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.141 [2024-11-27 08:45:11.723166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:15.141 [2024-11-27 08:45:11.723181] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.141 [2024-11-27 08:45:11.773112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.141 BaseBdev1 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:15.141 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.142 [ 00:13:15.142 { 00:13:15.142 "name": "BaseBdev1", 00:13:15.142 "aliases": [ 00:13:15.142 "6d886e42-9d8b-4177-9029-68b5e4883c91" 00:13:15.142 ], 00:13:15.142 "product_name": "Malloc disk", 00:13:15.142 "block_size": 512, 00:13:15.142 "num_blocks": 65536, 00:13:15.142 "uuid": "6d886e42-9d8b-4177-9029-68b5e4883c91", 00:13:15.142 "assigned_rate_limits": { 00:13:15.142 "rw_ios_per_sec": 0, 00:13:15.142 "rw_mbytes_per_sec": 0, 00:13:15.142 "r_mbytes_per_sec": 0, 00:13:15.142 "w_mbytes_per_sec": 0 00:13:15.142 }, 00:13:15.142 "claimed": true, 00:13:15.142 "claim_type": "exclusive_write", 00:13:15.142 "zoned": false, 00:13:15.142 "supported_io_types": { 00:13:15.142 "read": true, 00:13:15.142 "write": true, 00:13:15.142 "unmap": true, 00:13:15.142 "flush": true, 00:13:15.142 "reset": true, 00:13:15.142 "nvme_admin": false, 00:13:15.142 "nvme_io": false, 00:13:15.142 "nvme_io_md": false, 00:13:15.142 "write_zeroes": true, 00:13:15.142 "zcopy": true, 00:13:15.142 "get_zone_info": false, 00:13:15.142 "zone_management": false, 00:13:15.142 "zone_append": false, 00:13:15.142 "compare": false, 00:13:15.142 "compare_and_write": false, 00:13:15.142 "abort": true, 00:13:15.142 "seek_hole": false, 00:13:15.142 "seek_data": false, 00:13:15.142 "copy": true, 00:13:15.142 "nvme_iov_md": false 00:13:15.142 }, 00:13:15.142 "memory_domains": [ 00:13:15.142 { 00:13:15.142 "dma_device_id": "system", 00:13:15.142 "dma_device_type": 1 00:13:15.142 }, 00:13:15.142 { 00:13:15.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.142 "dma_device_type": 2 00:13:15.142 } 00:13:15.142 ], 00:13:15.142 "driver_specific": {} 00:13:15.142 } 00:13:15.142 ] 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.142 "name": "Existed_Raid", 
00:13:15.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.142 "strip_size_kb": 64, 00:13:15.142 "state": "configuring", 00:13:15.142 "raid_level": "raid0", 00:13:15.142 "superblock": false, 00:13:15.142 "num_base_bdevs": 4, 00:13:15.142 "num_base_bdevs_discovered": 1, 00:13:15.142 "num_base_bdevs_operational": 4, 00:13:15.142 "base_bdevs_list": [ 00:13:15.142 { 00:13:15.142 "name": "BaseBdev1", 00:13:15.142 "uuid": "6d886e42-9d8b-4177-9029-68b5e4883c91", 00:13:15.142 "is_configured": true, 00:13:15.142 "data_offset": 0, 00:13:15.142 "data_size": 65536 00:13:15.142 }, 00:13:15.142 { 00:13:15.142 "name": "BaseBdev2", 00:13:15.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.142 "is_configured": false, 00:13:15.142 "data_offset": 0, 00:13:15.142 "data_size": 0 00:13:15.142 }, 00:13:15.142 { 00:13:15.142 "name": "BaseBdev3", 00:13:15.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.142 "is_configured": false, 00:13:15.142 "data_offset": 0, 00:13:15.142 "data_size": 0 00:13:15.142 }, 00:13:15.142 { 00:13:15.142 "name": "BaseBdev4", 00:13:15.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.142 "is_configured": false, 00:13:15.142 "data_offset": 0, 00:13:15.142 "data_size": 0 00:13:15.142 } 00:13:15.142 ] 00:13:15.142 }' 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.142 08:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 [2024-11-27 08:45:12.353410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.709 [2024-11-27 08:45:12.353496] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 [2024-11-27 08:45:12.361466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.709 [2024-11-27 08:45:12.364319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.709 [2024-11-27 08:45:12.364421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.709 [2024-11-27 08:45:12.364439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.709 [2024-11-27 08:45:12.364456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.709 [2024-11-27 08:45:12.364467] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:15.709 [2024-11-27 08:45:12.364480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
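The `verify_raid_bdev_state` calls traced here fetch the full bdev listing and pick out `Existed_Raid` with a `jq` select before comparing fields. A hedged sketch of that check follows; the JSON literal is a hand-written stand-in for `rpc_cmd bdev_raid_get_bdevs all` output, trimmed to the fields the comparison needs:

```shell
# Stand-in for the RPC listing; the real test gets this from
# `rpc_cmd bdev_raid_get_bdevs all`.
bdevs_json='[{"name": "Existed_Raid", "state": "configuring",
              "raid_level": "raid0", "strip_size_kb": 64,
              "num_base_bdevs": 4, "num_base_bdevs_discovered": 1,
              "num_base_bdevs_operational": 4}]'

# Same selection the test performs (bdev/bdev_raid.sh@113).
raid_bdev_info=$(echo "$bdevs_json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Compare the fields verify_raid_bdev_state cares about against the
# expected values passed by the caller (here: configuring raid0 64 4).
state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')

[ "$state" = "configuring" ] && [ "$discovered" -eq 1 ] && echo "state check passed"
```

This is why the raid bdev stays in `configuring` throughout this passage: with `num_base_bdevs_discovered` below `num_base_bdevs_operational`, the raid cannot transition to online until every base bdev has been created and claimed.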
00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.709 "name": "Existed_Raid", 00:13:15.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.709 "strip_size_kb": 64, 00:13:15.709 "state": "configuring", 00:13:15.709 "raid_level": "raid0", 00:13:15.709 "superblock": false, 00:13:15.709 "num_base_bdevs": 4, 00:13:15.709 
"num_base_bdevs_discovered": 1, 00:13:15.709 "num_base_bdevs_operational": 4, 00:13:15.709 "base_bdevs_list": [ 00:13:15.709 { 00:13:15.709 "name": "BaseBdev1", 00:13:15.709 "uuid": "6d886e42-9d8b-4177-9029-68b5e4883c91", 00:13:15.709 "is_configured": true, 00:13:15.709 "data_offset": 0, 00:13:15.709 "data_size": 65536 00:13:15.709 }, 00:13:15.709 { 00:13:15.709 "name": "BaseBdev2", 00:13:15.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.709 "is_configured": false, 00:13:15.709 "data_offset": 0, 00:13:15.709 "data_size": 0 00:13:15.709 }, 00:13:15.709 { 00:13:15.709 "name": "BaseBdev3", 00:13:15.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.709 "is_configured": false, 00:13:15.709 "data_offset": 0, 00:13:15.709 "data_size": 0 00:13:15.709 }, 00:13:15.709 { 00:13:15.709 "name": "BaseBdev4", 00:13:15.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.709 "is_configured": false, 00:13:15.709 "data_offset": 0, 00:13:15.709 "data_size": 0 00:13:15.709 } 00:13:15.709 ] 00:13:15.709 }' 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.709 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.276 [2024-11-27 08:45:12.937267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.276 BaseBdev2 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:16.276 08:45:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.276 [ 00:13:16.276 { 00:13:16.276 "name": "BaseBdev2", 00:13:16.276 "aliases": [ 00:13:16.276 "dbc93051-7118-4607-a86d-c3064e2b3957" 00:13:16.276 ], 00:13:16.276 "product_name": "Malloc disk", 00:13:16.276 "block_size": 512, 00:13:16.276 "num_blocks": 65536, 00:13:16.276 "uuid": "dbc93051-7118-4607-a86d-c3064e2b3957", 00:13:16.276 "assigned_rate_limits": { 00:13:16.276 "rw_ios_per_sec": 0, 00:13:16.276 "rw_mbytes_per_sec": 0, 00:13:16.276 "r_mbytes_per_sec": 0, 00:13:16.276 "w_mbytes_per_sec": 0 00:13:16.276 }, 00:13:16.276 "claimed": true, 00:13:16.276 "claim_type": "exclusive_write", 00:13:16.276 "zoned": false, 00:13:16.276 "supported_io_types": { 
00:13:16.276 "read": true, 00:13:16.276 "write": true, 00:13:16.276 "unmap": true, 00:13:16.276 "flush": true, 00:13:16.276 "reset": true, 00:13:16.276 "nvme_admin": false, 00:13:16.276 "nvme_io": false, 00:13:16.276 "nvme_io_md": false, 00:13:16.276 "write_zeroes": true, 00:13:16.276 "zcopy": true, 00:13:16.276 "get_zone_info": false, 00:13:16.276 "zone_management": false, 00:13:16.276 "zone_append": false, 00:13:16.276 "compare": false, 00:13:16.276 "compare_and_write": false, 00:13:16.276 "abort": true, 00:13:16.276 "seek_hole": false, 00:13:16.276 "seek_data": false, 00:13:16.276 "copy": true, 00:13:16.276 "nvme_iov_md": false 00:13:16.276 }, 00:13:16.276 "memory_domains": [ 00:13:16.276 { 00:13:16.276 "dma_device_id": "system", 00:13:16.276 "dma_device_type": 1 00:13:16.276 }, 00:13:16.276 { 00:13:16.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.276 "dma_device_type": 2 00:13:16.276 } 00:13:16.276 ], 00:13:16.276 "driver_specific": {} 00:13:16.276 } 00:13:16.276 ] 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.276 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.277 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.277 "name": "Existed_Raid", 00:13:16.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.277 "strip_size_kb": 64, 00:13:16.277 "state": "configuring", 00:13:16.277 "raid_level": "raid0", 00:13:16.277 "superblock": false, 00:13:16.277 "num_base_bdevs": 4, 00:13:16.277 "num_base_bdevs_discovered": 2, 00:13:16.277 "num_base_bdevs_operational": 4, 00:13:16.277 "base_bdevs_list": [ 00:13:16.277 { 00:13:16.277 "name": "BaseBdev1", 00:13:16.277 "uuid": "6d886e42-9d8b-4177-9029-68b5e4883c91", 00:13:16.277 "is_configured": true, 00:13:16.277 "data_offset": 0, 00:13:16.277 "data_size": 65536 00:13:16.277 }, 00:13:16.277 { 00:13:16.277 "name": "BaseBdev2", 00:13:16.277 "uuid": "dbc93051-7118-4607-a86d-c3064e2b3957", 00:13:16.277 
"is_configured": true, 00:13:16.277 "data_offset": 0, 00:13:16.277 "data_size": 65536 00:13:16.277 }, 00:13:16.277 { 00:13:16.277 "name": "BaseBdev3", 00:13:16.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.277 "is_configured": false, 00:13:16.277 "data_offset": 0, 00:13:16.277 "data_size": 0 00:13:16.277 }, 00:13:16.277 { 00:13:16.277 "name": "BaseBdev4", 00:13:16.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.277 "is_configured": false, 00:13:16.277 "data_offset": 0, 00:13:16.277 "data_size": 0 00:13:16.277 } 00:13:16.277 ] 00:13:16.277 }' 00:13:16.277 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.277 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.846 [2024-11-27 08:45:13.527333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.846 BaseBdev3 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.846 [ 00:13:16.846 { 00:13:16.846 "name": "BaseBdev3", 00:13:16.846 "aliases": [ 00:13:16.846 "7e6528dd-d287-4dc7-b351-fdd8399b4f07" 00:13:16.846 ], 00:13:16.846 "product_name": "Malloc disk", 00:13:16.846 "block_size": 512, 00:13:16.846 "num_blocks": 65536, 00:13:16.846 "uuid": "7e6528dd-d287-4dc7-b351-fdd8399b4f07", 00:13:16.846 "assigned_rate_limits": { 00:13:16.846 "rw_ios_per_sec": 0, 00:13:16.846 "rw_mbytes_per_sec": 0, 00:13:16.846 "r_mbytes_per_sec": 0, 00:13:16.846 "w_mbytes_per_sec": 0 00:13:16.846 }, 00:13:16.846 "claimed": true, 00:13:16.846 "claim_type": "exclusive_write", 00:13:16.846 "zoned": false, 00:13:16.846 "supported_io_types": { 00:13:16.846 "read": true, 00:13:16.846 "write": true, 00:13:16.846 "unmap": true, 00:13:16.846 "flush": true, 00:13:16.846 "reset": true, 00:13:16.846 "nvme_admin": false, 00:13:16.846 "nvme_io": false, 00:13:16.846 "nvme_io_md": false, 00:13:16.846 "write_zeroes": true, 00:13:16.846 "zcopy": true, 00:13:16.846 "get_zone_info": false, 00:13:16.846 "zone_management": false, 00:13:16.846 "zone_append": false, 00:13:16.846 "compare": false, 00:13:16.846 "compare_and_write": false, 
00:13:16.846 "abort": true, 00:13:16.846 "seek_hole": false, 00:13:16.846 "seek_data": false, 00:13:16.846 "copy": true, 00:13:16.846 "nvme_iov_md": false 00:13:16.846 }, 00:13:16.846 "memory_domains": [ 00:13:16.846 { 00:13:16.846 "dma_device_id": "system", 00:13:16.846 "dma_device_type": 1 00:13:16.846 }, 00:13:16.846 { 00:13:16.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.846 "dma_device_type": 2 00:13:16.846 } 00:13:16.846 ], 00:13:16.846 "driver_specific": {} 00:13:16.846 } 00:13:16.846 ] 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:16.846 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.847 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.105 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.105 "name": "Existed_Raid", 00:13:17.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.105 "strip_size_kb": 64, 00:13:17.105 "state": "configuring", 00:13:17.105 "raid_level": "raid0", 00:13:17.105 "superblock": false, 00:13:17.105 "num_base_bdevs": 4, 00:13:17.105 "num_base_bdevs_discovered": 3, 00:13:17.105 "num_base_bdevs_operational": 4, 00:13:17.105 "base_bdevs_list": [ 00:13:17.105 { 00:13:17.105 "name": "BaseBdev1", 00:13:17.105 "uuid": "6d886e42-9d8b-4177-9029-68b5e4883c91", 00:13:17.105 "is_configured": true, 00:13:17.105 "data_offset": 0, 00:13:17.105 "data_size": 65536 00:13:17.105 }, 00:13:17.105 { 00:13:17.105 "name": "BaseBdev2", 00:13:17.105 "uuid": "dbc93051-7118-4607-a86d-c3064e2b3957", 00:13:17.105 "is_configured": true, 00:13:17.105 "data_offset": 0, 00:13:17.105 "data_size": 65536 00:13:17.105 }, 00:13:17.105 { 00:13:17.105 "name": "BaseBdev3", 00:13:17.105 "uuid": "7e6528dd-d287-4dc7-b351-fdd8399b4f07", 00:13:17.105 "is_configured": true, 00:13:17.105 "data_offset": 0, 00:13:17.105 "data_size": 65536 00:13:17.105 }, 00:13:17.105 { 00:13:17.105 "name": "BaseBdev4", 00:13:17.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.105 "is_configured": false, 
00:13:17.105 "data_offset": 0, 00:13:17.105 "data_size": 0 00:13:17.105 } 00:13:17.105 ] 00:13:17.105 }' 00:13:17.105 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.105 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.364 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:17.364 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.364 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.623 [2024-11-27 08:45:14.130823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.623 [2024-11-27 08:45:14.131159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:17.623 [2024-11-27 08:45:14.131185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:17.623 [2024-11-27 08:45:14.131574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:17.623 [2024-11-27 08:45:14.131818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:17.623 [2024-11-27 08:45:14.131843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:17.623 [2024-11-27 08:45:14.132183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.623 BaseBdev4 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.623 [ 00:13:17.623 { 00:13:17.623 "name": "BaseBdev4", 00:13:17.623 "aliases": [ 00:13:17.623 "3c66eed6-9ad1-41ed-8c46-d1f362c973f3" 00:13:17.623 ], 00:13:17.623 "product_name": "Malloc disk", 00:13:17.623 "block_size": 512, 00:13:17.623 "num_blocks": 65536, 00:13:17.623 "uuid": "3c66eed6-9ad1-41ed-8c46-d1f362c973f3", 00:13:17.623 "assigned_rate_limits": { 00:13:17.623 "rw_ios_per_sec": 0, 00:13:17.623 "rw_mbytes_per_sec": 0, 00:13:17.623 "r_mbytes_per_sec": 0, 00:13:17.623 "w_mbytes_per_sec": 0 00:13:17.623 }, 00:13:17.623 "claimed": true, 00:13:17.623 "claim_type": "exclusive_write", 00:13:17.623 "zoned": false, 00:13:17.623 "supported_io_types": { 00:13:17.623 "read": true, 00:13:17.623 "write": true, 00:13:17.623 "unmap": true, 00:13:17.623 "flush": true, 00:13:17.623 "reset": true, 00:13:17.623 
"nvme_admin": false, 00:13:17.623 "nvme_io": false, 00:13:17.623 "nvme_io_md": false, 00:13:17.623 "write_zeroes": true, 00:13:17.623 "zcopy": true, 00:13:17.623 "get_zone_info": false, 00:13:17.623 "zone_management": false, 00:13:17.623 "zone_append": false, 00:13:17.623 "compare": false, 00:13:17.623 "compare_and_write": false, 00:13:17.623 "abort": true, 00:13:17.623 "seek_hole": false, 00:13:17.623 "seek_data": false, 00:13:17.623 "copy": true, 00:13:17.623 "nvme_iov_md": false 00:13:17.623 }, 00:13:17.623 "memory_domains": [ 00:13:17.623 { 00:13:17.623 "dma_device_id": "system", 00:13:17.623 "dma_device_type": 1 00:13:17.623 }, 00:13:17.623 { 00:13:17.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.623 "dma_device_type": 2 00:13:17.623 } 00:13:17.623 ], 00:13:17.623 "driver_specific": {} 00:13:17.623 } 00:13:17.623 ] 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.623 08:45:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.623 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.624 "name": "Existed_Raid", 00:13:17.624 "uuid": "3615eedf-18b1-4fad-ac35-65e6c2b4f030", 00:13:17.624 "strip_size_kb": 64, 00:13:17.624 "state": "online", 00:13:17.624 "raid_level": "raid0", 00:13:17.624 "superblock": false, 00:13:17.624 "num_base_bdevs": 4, 00:13:17.624 "num_base_bdevs_discovered": 4, 00:13:17.624 "num_base_bdevs_operational": 4, 00:13:17.624 "base_bdevs_list": [ 00:13:17.624 { 00:13:17.624 "name": "BaseBdev1", 00:13:17.624 "uuid": "6d886e42-9d8b-4177-9029-68b5e4883c91", 00:13:17.624 "is_configured": true, 00:13:17.624 "data_offset": 0, 00:13:17.624 "data_size": 65536 00:13:17.624 }, 00:13:17.624 { 00:13:17.624 "name": "BaseBdev2", 00:13:17.624 "uuid": "dbc93051-7118-4607-a86d-c3064e2b3957", 00:13:17.624 "is_configured": true, 00:13:17.624 "data_offset": 0, 00:13:17.624 "data_size": 65536 00:13:17.624 }, 00:13:17.624 { 00:13:17.624 "name": "BaseBdev3", 00:13:17.624 "uuid": 
"7e6528dd-d287-4dc7-b351-fdd8399b4f07", 00:13:17.624 "is_configured": true, 00:13:17.624 "data_offset": 0, 00:13:17.624 "data_size": 65536 00:13:17.624 }, 00:13:17.624 { 00:13:17.624 "name": "BaseBdev4", 00:13:17.624 "uuid": "3c66eed6-9ad1-41ed-8c46-d1f362c973f3", 00:13:17.624 "is_configured": true, 00:13:17.624 "data_offset": 0, 00:13:17.624 "data_size": 65536 00:13:17.624 } 00:13:17.624 ] 00:13:17.624 }' 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.624 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:18.191 [2024-11-27 08:45:14.715576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.191 08:45:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:18.191 "name": "Existed_Raid", 00:13:18.191 "aliases": [ 00:13:18.191 "3615eedf-18b1-4fad-ac35-65e6c2b4f030" 00:13:18.191 ], 00:13:18.191 "product_name": "Raid Volume", 00:13:18.191 "block_size": 512, 00:13:18.191 "num_blocks": 262144, 00:13:18.191 "uuid": "3615eedf-18b1-4fad-ac35-65e6c2b4f030", 00:13:18.191 "assigned_rate_limits": { 00:13:18.191 "rw_ios_per_sec": 0, 00:13:18.191 "rw_mbytes_per_sec": 0, 00:13:18.191 "r_mbytes_per_sec": 0, 00:13:18.191 "w_mbytes_per_sec": 0 00:13:18.191 }, 00:13:18.191 "claimed": false, 00:13:18.191 "zoned": false, 00:13:18.191 "supported_io_types": { 00:13:18.191 "read": true, 00:13:18.191 "write": true, 00:13:18.191 "unmap": true, 00:13:18.191 "flush": true, 00:13:18.191 "reset": true, 00:13:18.191 "nvme_admin": false, 00:13:18.191 "nvme_io": false, 00:13:18.191 "nvme_io_md": false, 00:13:18.191 "write_zeroes": true, 00:13:18.191 "zcopy": false, 00:13:18.191 "get_zone_info": false, 00:13:18.191 "zone_management": false, 00:13:18.191 "zone_append": false, 00:13:18.191 "compare": false, 00:13:18.191 "compare_and_write": false, 00:13:18.191 "abort": false, 00:13:18.191 "seek_hole": false, 00:13:18.191 "seek_data": false, 00:13:18.191 "copy": false, 00:13:18.191 "nvme_iov_md": false 00:13:18.191 }, 00:13:18.191 "memory_domains": [ 00:13:18.191 { 00:13:18.191 "dma_device_id": "system", 00:13:18.191 "dma_device_type": 1 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.191 "dma_device_type": 2 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "dma_device_id": "system", 00:13:18.191 "dma_device_type": 1 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.191 "dma_device_type": 2 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "dma_device_id": "system", 00:13:18.191 "dma_device_type": 1 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:18.191 "dma_device_type": 2 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "dma_device_id": "system", 00:13:18.191 "dma_device_type": 1 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.191 "dma_device_type": 2 00:13:18.191 } 00:13:18.191 ], 00:13:18.191 "driver_specific": { 00:13:18.191 "raid": { 00:13:18.191 "uuid": "3615eedf-18b1-4fad-ac35-65e6c2b4f030", 00:13:18.191 "strip_size_kb": 64, 00:13:18.191 "state": "online", 00:13:18.191 "raid_level": "raid0", 00:13:18.191 "superblock": false, 00:13:18.191 "num_base_bdevs": 4, 00:13:18.191 "num_base_bdevs_discovered": 4, 00:13:18.191 "num_base_bdevs_operational": 4, 00:13:18.191 "base_bdevs_list": [ 00:13:18.191 { 00:13:18.191 "name": "BaseBdev1", 00:13:18.191 "uuid": "6d886e42-9d8b-4177-9029-68b5e4883c91", 00:13:18.191 "is_configured": true, 00:13:18.191 "data_offset": 0, 00:13:18.191 "data_size": 65536 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "name": "BaseBdev2", 00:13:18.191 "uuid": "dbc93051-7118-4607-a86d-c3064e2b3957", 00:13:18.191 "is_configured": true, 00:13:18.191 "data_offset": 0, 00:13:18.191 "data_size": 65536 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "name": "BaseBdev3", 00:13:18.191 "uuid": "7e6528dd-d287-4dc7-b351-fdd8399b4f07", 00:13:18.191 "is_configured": true, 00:13:18.191 "data_offset": 0, 00:13:18.191 "data_size": 65536 00:13:18.191 }, 00:13:18.191 { 00:13:18.191 "name": "BaseBdev4", 00:13:18.191 "uuid": "3c66eed6-9ad1-41ed-8c46-d1f362c973f3", 00:13:18.191 "is_configured": true, 00:13:18.191 "data_offset": 0, 00:13:18.191 "data_size": 65536 00:13:18.191 } 00:13:18.191 ] 00:13:18.191 } 00:13:18.191 } 00:13:18.191 }' 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:18.191 BaseBdev2 00:13:18.191 BaseBdev3 
00:13:18.191 BaseBdev4' 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.191 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.450 08:45:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.450 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.450 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.450 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:18.450 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.450 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.450 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.450 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.450 08:45:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.450 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.451 [2024-11-27 08:45:15.091262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.451 [2024-11-27 08:45:15.091475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.451 [2024-11-27 08:45:15.091582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.451 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.708 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.708 "name": "Existed_Raid", 00:13:18.708 "uuid": "3615eedf-18b1-4fad-ac35-65e6c2b4f030", 00:13:18.708 "strip_size_kb": 64, 00:13:18.708 "state": "offline", 00:13:18.708 "raid_level": "raid0", 00:13:18.708 "superblock": false, 00:13:18.708 "num_base_bdevs": 4, 00:13:18.708 "num_base_bdevs_discovered": 3, 00:13:18.708 "num_base_bdevs_operational": 3, 00:13:18.708 "base_bdevs_list": [ 00:13:18.708 { 00:13:18.708 "name": null, 00:13:18.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.708 "is_configured": false, 00:13:18.708 "data_offset": 0, 00:13:18.708 "data_size": 65536 00:13:18.708 }, 00:13:18.708 { 00:13:18.708 "name": "BaseBdev2", 00:13:18.708 "uuid": "dbc93051-7118-4607-a86d-c3064e2b3957", 00:13:18.708 "is_configured": 
true, 00:13:18.708 "data_offset": 0, 00:13:18.708 "data_size": 65536 00:13:18.708 }, 00:13:18.709 { 00:13:18.709 "name": "BaseBdev3", 00:13:18.709 "uuid": "7e6528dd-d287-4dc7-b351-fdd8399b4f07", 00:13:18.709 "is_configured": true, 00:13:18.709 "data_offset": 0, 00:13:18.709 "data_size": 65536 00:13:18.709 }, 00:13:18.709 { 00:13:18.709 "name": "BaseBdev4", 00:13:18.709 "uuid": "3c66eed6-9ad1-41ed-8c46-d1f362c973f3", 00:13:18.709 "is_configured": true, 00:13:18.709 "data_offset": 0, 00:13:18.709 "data_size": 65536 00:13:18.709 } 00:13:18.709 ] 00:13:18.709 }' 00:13:18.709 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.709 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.967 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:18.967 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.967 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.967 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.967 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:18.967 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.225 [2024-11-27 08:45:15.765125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.225 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.225 [2024-11-27 08:45:15.923466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:19.484 08:45:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.484 [2024-11-27 08:45:16.077819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:19.484 [2024-11-27 08:45:16.077891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.484 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.742 BaseBdev2 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
bdev_timeout=2000 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.742 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 [ 00:13:19.743 { 00:13:19.743 "name": "BaseBdev2", 00:13:19.743 "aliases": [ 00:13:19.743 "26f06122-44a6-42d2-a13d-16e2904b042d" 00:13:19.743 ], 00:13:19.743 "product_name": "Malloc disk", 00:13:19.743 "block_size": 512, 00:13:19.743 "num_blocks": 65536, 00:13:19.743 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:19.743 "assigned_rate_limits": { 00:13:19.743 "rw_ios_per_sec": 0, 00:13:19.743 "rw_mbytes_per_sec": 0, 00:13:19.743 "r_mbytes_per_sec": 0, 00:13:19.743 "w_mbytes_per_sec": 0 00:13:19.743 }, 00:13:19.743 "claimed": false, 00:13:19.743 "zoned": false, 00:13:19.743 "supported_io_types": { 00:13:19.743 "read": true, 00:13:19.743 "write": true, 00:13:19.743 "unmap": true, 00:13:19.743 "flush": true, 00:13:19.743 "reset": true, 00:13:19.743 "nvme_admin": false, 00:13:19.743 "nvme_io": false, 00:13:19.743 "nvme_io_md": false, 00:13:19.743 "write_zeroes": true, 00:13:19.743 "zcopy": true, 00:13:19.743 "get_zone_info": false, 00:13:19.743 "zone_management": false, 00:13:19.743 "zone_append": false, 00:13:19.743 "compare": false, 00:13:19.743 "compare_and_write": false, 00:13:19.743 "abort": true, 00:13:19.743 "seek_hole": false, 00:13:19.743 
"seek_data": false, 00:13:19.743 "copy": true, 00:13:19.743 "nvme_iov_md": false 00:13:19.743 }, 00:13:19.743 "memory_domains": [ 00:13:19.743 { 00:13:19.743 "dma_device_id": "system", 00:13:19.743 "dma_device_type": 1 00:13:19.743 }, 00:13:19.743 { 00:13:19.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.743 "dma_device_type": 2 00:13:19.743 } 00:13:19.743 ], 00:13:19.743 "driver_specific": {} 00:13:19.743 } 00:13:19.743 ] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 BaseBdev3 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 
00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 [ 00:13:19.743 { 00:13:19.743 "name": "BaseBdev3", 00:13:19.743 "aliases": [ 00:13:19.743 "67556a45-dff5-4bc8-a6e7-e32a2de44fd8" 00:13:19.743 ], 00:13:19.743 "product_name": "Malloc disk", 00:13:19.743 "block_size": 512, 00:13:19.743 "num_blocks": 65536, 00:13:19.743 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:19.743 "assigned_rate_limits": { 00:13:19.743 "rw_ios_per_sec": 0, 00:13:19.743 "rw_mbytes_per_sec": 0, 00:13:19.743 "r_mbytes_per_sec": 0, 00:13:19.743 "w_mbytes_per_sec": 0 00:13:19.743 }, 00:13:19.743 "claimed": false, 00:13:19.743 "zoned": false, 00:13:19.743 "supported_io_types": { 00:13:19.743 "read": true, 00:13:19.743 "write": true, 00:13:19.743 "unmap": true, 00:13:19.743 "flush": true, 00:13:19.743 "reset": true, 00:13:19.743 "nvme_admin": false, 00:13:19.743 "nvme_io": false, 00:13:19.743 "nvme_io_md": false, 00:13:19.743 "write_zeroes": true, 00:13:19.743 "zcopy": true, 00:13:19.743 "get_zone_info": false, 00:13:19.743 "zone_management": false, 00:13:19.743 "zone_append": false, 00:13:19.743 "compare": false, 00:13:19.743 "compare_and_write": false, 00:13:19.743 "abort": true, 00:13:19.743 "seek_hole": false, 00:13:19.743 "seek_data": false, 
00:13:19.743 "copy": true, 00:13:19.743 "nvme_iov_md": false 00:13:19.743 }, 00:13:19.743 "memory_domains": [ 00:13:19.743 { 00:13:19.743 "dma_device_id": "system", 00:13:19.743 "dma_device_type": 1 00:13:19.743 }, 00:13:19.743 { 00:13:19.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.743 "dma_device_type": 2 00:13:19.743 } 00:13:19.743 ], 00:13:19.743 "driver_specific": {} 00:13:19.743 } 00:13:19.743 ] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 BaseBdev4 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:19.743 
08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 [ 00:13:19.743 { 00:13:19.743 "name": "BaseBdev4", 00:13:19.743 "aliases": [ 00:13:19.743 "7a148889-afff-466d-bb9c-257d101077d9" 00:13:19.743 ], 00:13:19.743 "product_name": "Malloc disk", 00:13:19.743 "block_size": 512, 00:13:19.743 "num_blocks": 65536, 00:13:19.743 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:19.743 "assigned_rate_limits": { 00:13:19.743 "rw_ios_per_sec": 0, 00:13:19.743 "rw_mbytes_per_sec": 0, 00:13:19.743 "r_mbytes_per_sec": 0, 00:13:19.743 "w_mbytes_per_sec": 0 00:13:19.743 }, 00:13:19.743 "claimed": false, 00:13:19.743 "zoned": false, 00:13:19.743 "supported_io_types": { 00:13:19.743 "read": true, 00:13:19.743 "write": true, 00:13:19.743 "unmap": true, 00:13:19.743 "flush": true, 00:13:19.743 "reset": true, 00:13:19.743 "nvme_admin": false, 00:13:19.743 "nvme_io": false, 00:13:19.743 "nvme_io_md": false, 00:13:19.743 "write_zeroes": true, 00:13:19.743 "zcopy": true, 00:13:19.743 "get_zone_info": false, 00:13:19.743 "zone_management": false, 00:13:19.743 "zone_append": false, 00:13:19.743 "compare": false, 00:13:19.743 "compare_and_write": false, 00:13:19.743 "abort": true, 00:13:19.743 "seek_hole": false, 00:13:19.743 "seek_data": false, 00:13:19.743 
"copy": true, 00:13:19.743 "nvme_iov_md": false 00:13:19.743 }, 00:13:19.743 "memory_domains": [ 00:13:19.743 { 00:13:19.743 "dma_device_id": "system", 00:13:19.743 "dma_device_type": 1 00:13:19.743 }, 00:13:19.743 { 00:13:19.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.743 "dma_device_type": 2 00:13:19.743 } 00:13:19.743 ], 00:13:19.743 "driver_specific": {} 00:13:19.743 } 00:13:19.743 ] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.743 [2024-11-27 08:45:16.473172] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.743 [2024-11-27 08:45:16.473243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.743 [2024-11-27 08:45:16.473303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.743 [2024-11-27 08:45:16.476110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:19.743 [2024-11-27 08:45:16.476183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.743 08:45:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.743 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.001 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.001 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.001 "name": "Existed_Raid", 00:13:20.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.001 "strip_size_kb": 64, 00:13:20.001 "state": "configuring", 00:13:20.001 
"raid_level": "raid0", 00:13:20.001 "superblock": false, 00:13:20.001 "num_base_bdevs": 4, 00:13:20.001 "num_base_bdevs_discovered": 3, 00:13:20.001 "num_base_bdevs_operational": 4, 00:13:20.001 "base_bdevs_list": [ 00:13:20.001 { 00:13:20.001 "name": "BaseBdev1", 00:13:20.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.001 "is_configured": false, 00:13:20.001 "data_offset": 0, 00:13:20.001 "data_size": 0 00:13:20.001 }, 00:13:20.001 { 00:13:20.001 "name": "BaseBdev2", 00:13:20.001 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:20.001 "is_configured": true, 00:13:20.001 "data_offset": 0, 00:13:20.001 "data_size": 65536 00:13:20.001 }, 00:13:20.001 { 00:13:20.001 "name": "BaseBdev3", 00:13:20.001 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:20.001 "is_configured": true, 00:13:20.001 "data_offset": 0, 00:13:20.001 "data_size": 65536 00:13:20.001 }, 00:13:20.001 { 00:13:20.001 "name": "BaseBdev4", 00:13:20.001 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:20.001 "is_configured": true, 00:13:20.001 "data_offset": 0, 00:13:20.001 "data_size": 65536 00:13:20.001 } 00:13:20.001 ] 00:13:20.001 }' 00:13:20.001 08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.001 08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.634 [2024-11-27 08:45:17.037295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.634 "name": "Existed_Raid", 00:13:20.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.634 "strip_size_kb": 64, 00:13:20.634 "state": "configuring", 00:13:20.634 "raid_level": "raid0", 00:13:20.634 "superblock": false, 00:13:20.634 
"num_base_bdevs": 4, 00:13:20.634 "num_base_bdevs_discovered": 2, 00:13:20.634 "num_base_bdevs_operational": 4, 00:13:20.634 "base_bdevs_list": [ 00:13:20.634 { 00:13:20.634 "name": "BaseBdev1", 00:13:20.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.634 "is_configured": false, 00:13:20.634 "data_offset": 0, 00:13:20.634 "data_size": 0 00:13:20.634 }, 00:13:20.634 { 00:13:20.634 "name": null, 00:13:20.634 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:20.634 "is_configured": false, 00:13:20.634 "data_offset": 0, 00:13:20.634 "data_size": 65536 00:13:20.634 }, 00:13:20.634 { 00:13:20.634 "name": "BaseBdev3", 00:13:20.634 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:20.634 "is_configured": true, 00:13:20.634 "data_offset": 0, 00:13:20.634 "data_size": 65536 00:13:20.634 }, 00:13:20.634 { 00:13:20.634 "name": "BaseBdev4", 00:13:20.634 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:20.634 "is_configured": true, 00:13:20.634 "data_offset": 0, 00:13:20.634 "data_size": 65536 00:13:20.634 } 00:13:20.634 ] 00:13:20.634 }' 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.634 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:20.936 08:45:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.936 [2024-11-27 08:45:17.687249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.936 BaseBdev1 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.936 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.194 [ 00:13:21.194 { 00:13:21.194 "name": "BaseBdev1", 00:13:21.194 "aliases": [ 00:13:21.194 "477247e5-ea49-4816-87d1-8d64a78e62f8" 00:13:21.194 ], 00:13:21.194 "product_name": "Malloc disk", 00:13:21.194 "block_size": 512, 00:13:21.194 "num_blocks": 65536, 00:13:21.194 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:21.194 "assigned_rate_limits": { 00:13:21.194 "rw_ios_per_sec": 0, 00:13:21.194 "rw_mbytes_per_sec": 0, 00:13:21.194 "r_mbytes_per_sec": 0, 00:13:21.194 "w_mbytes_per_sec": 0 00:13:21.194 }, 00:13:21.194 "claimed": true, 00:13:21.194 "claim_type": "exclusive_write", 00:13:21.194 "zoned": false, 00:13:21.194 "supported_io_types": { 00:13:21.194 "read": true, 00:13:21.194 "write": true, 00:13:21.194 "unmap": true, 00:13:21.194 "flush": true, 00:13:21.194 "reset": true, 00:13:21.194 "nvme_admin": false, 00:13:21.194 "nvme_io": false, 00:13:21.194 "nvme_io_md": false, 00:13:21.194 "write_zeroes": true, 00:13:21.194 "zcopy": true, 00:13:21.194 "get_zone_info": false, 00:13:21.194 "zone_management": false, 00:13:21.194 "zone_append": false, 00:13:21.194 "compare": false, 00:13:21.194 "compare_and_write": false, 00:13:21.194 "abort": true, 00:13:21.194 "seek_hole": false, 00:13:21.194 "seek_data": false, 00:13:21.194 "copy": true, 00:13:21.194 "nvme_iov_md": false 00:13:21.194 }, 00:13:21.194 "memory_domains": [ 00:13:21.194 { 00:13:21.194 "dma_device_id": "system", 00:13:21.194 "dma_device_type": 1 00:13:21.194 }, 00:13:21.194 { 00:13:21.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.194 "dma_device_type": 2 00:13:21.194 } 00:13:21.194 ], 00:13:21.194 "driver_specific": {} 00:13:21.194 } 00:13:21.194 ] 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.194 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.195 "name": "Existed_Raid", 00:13:21.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.195 "strip_size_kb": 64, 00:13:21.195 "state": "configuring", 00:13:21.195 "raid_level": "raid0", 00:13:21.195 "superblock": false, 
00:13:21.195 "num_base_bdevs": 4, 00:13:21.195 "num_base_bdevs_discovered": 3, 00:13:21.195 "num_base_bdevs_operational": 4, 00:13:21.195 "base_bdevs_list": [ 00:13:21.195 { 00:13:21.195 "name": "BaseBdev1", 00:13:21.195 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:21.195 "is_configured": true, 00:13:21.195 "data_offset": 0, 00:13:21.195 "data_size": 65536 00:13:21.195 }, 00:13:21.195 { 00:13:21.195 "name": null, 00:13:21.195 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:21.195 "is_configured": false, 00:13:21.195 "data_offset": 0, 00:13:21.195 "data_size": 65536 00:13:21.195 }, 00:13:21.195 { 00:13:21.195 "name": "BaseBdev3", 00:13:21.195 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:21.195 "is_configured": true, 00:13:21.195 "data_offset": 0, 00:13:21.195 "data_size": 65536 00:13:21.195 }, 00:13:21.195 { 00:13:21.195 "name": "BaseBdev4", 00:13:21.195 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:21.195 "is_configured": true, 00:13:21.195 "data_offset": 0, 00:13:21.195 "data_size": 65536 00:13:21.195 } 00:13:21.195 ] 00:13:21.195 }' 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.195 08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:21.761 08:45:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.761 [2024-11-27 08:45:18.335558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.761 08:45:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.761 "name": "Existed_Raid", 00:13:21.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.761 "strip_size_kb": 64, 00:13:21.761 "state": "configuring", 00:13:21.761 "raid_level": "raid0", 00:13:21.761 "superblock": false, 00:13:21.761 "num_base_bdevs": 4, 00:13:21.761 "num_base_bdevs_discovered": 2, 00:13:21.761 "num_base_bdevs_operational": 4, 00:13:21.761 "base_bdevs_list": [ 00:13:21.761 { 00:13:21.761 "name": "BaseBdev1", 00:13:21.761 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:21.761 "is_configured": true, 00:13:21.761 "data_offset": 0, 00:13:21.761 "data_size": 65536 00:13:21.761 }, 00:13:21.761 { 00:13:21.761 "name": null, 00:13:21.761 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:21.761 "is_configured": false, 00:13:21.761 "data_offset": 0, 00:13:21.761 "data_size": 65536 00:13:21.761 }, 00:13:21.761 { 00:13:21.761 "name": null, 00:13:21.761 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:21.761 "is_configured": false, 00:13:21.761 "data_offset": 0, 00:13:21.761 "data_size": 65536 00:13:21.761 }, 00:13:21.761 { 00:13:21.761 "name": "BaseBdev4", 00:13:21.761 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:21.761 "is_configured": true, 00:13:21.761 "data_offset": 0, 00:13:21.761 "data_size": 65536 00:13:21.761 } 00:13:21.761 ] 00:13:21.761 }' 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.761 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 [2024-11-27 08:45:18.947722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.333 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.333 "name": "Existed_Raid", 00:13:22.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.333 "strip_size_kb": 64, 00:13:22.333 "state": "configuring", 00:13:22.333 "raid_level": "raid0", 00:13:22.333 "superblock": false, 00:13:22.334 "num_base_bdevs": 4, 00:13:22.334 "num_base_bdevs_discovered": 3, 00:13:22.334 "num_base_bdevs_operational": 4, 00:13:22.334 "base_bdevs_list": [ 00:13:22.334 { 00:13:22.334 "name": "BaseBdev1", 00:13:22.334 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:22.334 "is_configured": true, 00:13:22.334 "data_offset": 0, 00:13:22.334 "data_size": 65536 00:13:22.334 }, 00:13:22.334 { 00:13:22.334 "name": null, 00:13:22.334 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:22.334 "is_configured": false, 00:13:22.334 "data_offset": 0, 00:13:22.334 "data_size": 65536 00:13:22.334 }, 00:13:22.334 { 00:13:22.334 "name": "BaseBdev3", 00:13:22.334 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 
00:13:22.334 "is_configured": true, 00:13:22.334 "data_offset": 0, 00:13:22.334 "data_size": 65536 00:13:22.334 }, 00:13:22.334 { 00:13:22.334 "name": "BaseBdev4", 00:13:22.334 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:22.334 "is_configured": true, 00:13:22.334 "data_offset": 0, 00:13:22.334 "data_size": 65536 00:13:22.334 } 00:13:22.334 ] 00:13:22.334 }' 00:13:22.334 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.334 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.908 [2024-11-27 08:45:19.507919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.908 08:45:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.908 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.166 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.166 "name": "Existed_Raid", 00:13:23.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.166 "strip_size_kb": 64, 00:13:23.166 "state": "configuring", 00:13:23.166 "raid_level": "raid0", 00:13:23.166 "superblock": false, 00:13:23.166 "num_base_bdevs": 4, 00:13:23.166 "num_base_bdevs_discovered": 2, 00:13:23.166 
"num_base_bdevs_operational": 4, 00:13:23.166 "base_bdevs_list": [ 00:13:23.166 { 00:13:23.166 "name": null, 00:13:23.166 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:23.166 "is_configured": false, 00:13:23.166 "data_offset": 0, 00:13:23.166 "data_size": 65536 00:13:23.166 }, 00:13:23.166 { 00:13:23.166 "name": null, 00:13:23.166 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:23.166 "is_configured": false, 00:13:23.166 "data_offset": 0, 00:13:23.166 "data_size": 65536 00:13:23.166 }, 00:13:23.166 { 00:13:23.166 "name": "BaseBdev3", 00:13:23.166 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:23.166 "is_configured": true, 00:13:23.166 "data_offset": 0, 00:13:23.166 "data_size": 65536 00:13:23.166 }, 00:13:23.166 { 00:13:23.166 "name": "BaseBdev4", 00:13:23.166 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:23.166 "is_configured": true, 00:13:23.166 "data_offset": 0, 00:13:23.166 "data_size": 65536 00:13:23.166 } 00:13:23.166 ] 00:13:23.166 }' 00:13:23.166 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.166 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.425 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.425 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.425 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.425 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:23.425 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.683 [2024-11-27 08:45:20.186652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.683 
08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.683 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.683 "name": "Existed_Raid", 00:13:23.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.683 "strip_size_kb": 64, 00:13:23.683 "state": "configuring", 00:13:23.683 "raid_level": "raid0", 00:13:23.683 "superblock": false, 00:13:23.683 "num_base_bdevs": 4, 00:13:23.683 "num_base_bdevs_discovered": 3, 00:13:23.683 "num_base_bdevs_operational": 4, 00:13:23.683 "base_bdevs_list": [ 00:13:23.683 { 00:13:23.683 "name": null, 00:13:23.683 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:23.683 "is_configured": false, 00:13:23.683 "data_offset": 0, 00:13:23.683 "data_size": 65536 00:13:23.683 }, 00:13:23.683 { 00:13:23.683 "name": "BaseBdev2", 00:13:23.683 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:23.683 "is_configured": true, 00:13:23.683 "data_offset": 0, 00:13:23.683 "data_size": 65536 00:13:23.683 }, 00:13:23.683 { 00:13:23.683 "name": "BaseBdev3", 00:13:23.683 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:23.683 "is_configured": true, 00:13:23.683 "data_offset": 0, 00:13:23.683 "data_size": 65536 00:13:23.683 }, 00:13:23.684 { 00:13:23.684 "name": "BaseBdev4", 00:13:23.684 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:23.684 "is_configured": true, 00:13:23.684 "data_offset": 0, 00:13:23.684 "data_size": 65536 00:13:23.684 } 00:13:23.684 ] 00:13:23.684 }' 00:13:23.684 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.684 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.010 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.010 08:45:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.010 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.010 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.010 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 477247e5-ea49-4816-87d1-8d64a78e62f8 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.270 [2024-11-27 08:45:20.896115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:24.270 [2024-11-27 08:45:20.896212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:24.270 [2024-11-27 08:45:20.896225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:24.270 [2024-11-27 08:45:20.896651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:24.270 
[2024-11-27 08:45:20.896858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:24.270 [2024-11-27 08:45:20.896880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:24.270 [2024-11-27 08:45:20.897215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.270 NewBaseBdev 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.270 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:24.270 [ 00:13:24.270 { 00:13:24.270 "name": "NewBaseBdev", 00:13:24.270 "aliases": [ 00:13:24.270 "477247e5-ea49-4816-87d1-8d64a78e62f8" 00:13:24.270 ], 00:13:24.270 "product_name": "Malloc disk", 00:13:24.270 "block_size": 512, 00:13:24.270 "num_blocks": 65536, 00:13:24.270 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:24.270 "assigned_rate_limits": { 00:13:24.270 "rw_ios_per_sec": 0, 00:13:24.270 "rw_mbytes_per_sec": 0, 00:13:24.270 "r_mbytes_per_sec": 0, 00:13:24.270 "w_mbytes_per_sec": 0 00:13:24.270 }, 00:13:24.270 "claimed": true, 00:13:24.270 "claim_type": "exclusive_write", 00:13:24.270 "zoned": false, 00:13:24.270 "supported_io_types": { 00:13:24.270 "read": true, 00:13:24.270 "write": true, 00:13:24.270 "unmap": true, 00:13:24.270 "flush": true, 00:13:24.270 "reset": true, 00:13:24.270 "nvme_admin": false, 00:13:24.270 "nvme_io": false, 00:13:24.270 "nvme_io_md": false, 00:13:24.270 "write_zeroes": true, 00:13:24.270 "zcopy": true, 00:13:24.270 "get_zone_info": false, 00:13:24.270 "zone_management": false, 00:13:24.270 "zone_append": false, 00:13:24.270 "compare": false, 00:13:24.271 "compare_and_write": false, 00:13:24.271 "abort": true, 00:13:24.271 "seek_hole": false, 00:13:24.271 "seek_data": false, 00:13:24.271 "copy": true, 00:13:24.271 "nvme_iov_md": false 00:13:24.271 }, 00:13:24.271 "memory_domains": [ 00:13:24.271 { 00:13:24.271 "dma_device_id": "system", 00:13:24.271 "dma_device_type": 1 00:13:24.271 }, 00:13:24.271 { 00:13:24.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.271 "dma_device_type": 2 00:13:24.271 } 00:13:24.271 ], 00:13:24.271 "driver_specific": {} 00:13:24.271 } 00:13:24.271 ] 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.271 "name": "Existed_Raid", 00:13:24.271 "uuid": "f9e4b060-47b3-44ee-847d-04a890324705", 00:13:24.271 "strip_size_kb": 64, 00:13:24.271 "state": "online", 00:13:24.271 "raid_level": "raid0", 00:13:24.271 "superblock": false, 00:13:24.271 "num_base_bdevs": 4, 00:13:24.271 
"num_base_bdevs_discovered": 4, 00:13:24.271 "num_base_bdevs_operational": 4, 00:13:24.271 "base_bdevs_list": [ 00:13:24.271 { 00:13:24.271 "name": "NewBaseBdev", 00:13:24.271 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:24.271 "is_configured": true, 00:13:24.271 "data_offset": 0, 00:13:24.271 "data_size": 65536 00:13:24.271 }, 00:13:24.271 { 00:13:24.271 "name": "BaseBdev2", 00:13:24.271 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:24.271 "is_configured": true, 00:13:24.271 "data_offset": 0, 00:13:24.271 "data_size": 65536 00:13:24.271 }, 00:13:24.271 { 00:13:24.271 "name": "BaseBdev3", 00:13:24.271 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:24.271 "is_configured": true, 00:13:24.271 "data_offset": 0, 00:13:24.271 "data_size": 65536 00:13:24.271 }, 00:13:24.271 { 00:13:24.271 "name": "BaseBdev4", 00:13:24.271 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:24.271 "is_configured": true, 00:13:24.271 "data_offset": 0, 00:13:24.271 "data_size": 65536 00:13:24.271 } 00:13:24.271 ] 00:13:24.271 }' 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.271 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.840 [2024-11-27 08:45:21.412796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.840 "name": "Existed_Raid", 00:13:24.840 "aliases": [ 00:13:24.840 "f9e4b060-47b3-44ee-847d-04a890324705" 00:13:24.840 ], 00:13:24.840 "product_name": "Raid Volume", 00:13:24.840 "block_size": 512, 00:13:24.840 "num_blocks": 262144, 00:13:24.840 "uuid": "f9e4b060-47b3-44ee-847d-04a890324705", 00:13:24.840 "assigned_rate_limits": { 00:13:24.840 "rw_ios_per_sec": 0, 00:13:24.840 "rw_mbytes_per_sec": 0, 00:13:24.840 "r_mbytes_per_sec": 0, 00:13:24.840 "w_mbytes_per_sec": 0 00:13:24.840 }, 00:13:24.840 "claimed": false, 00:13:24.840 "zoned": false, 00:13:24.840 "supported_io_types": { 00:13:24.840 "read": true, 00:13:24.840 "write": true, 00:13:24.840 "unmap": true, 00:13:24.840 "flush": true, 00:13:24.840 "reset": true, 00:13:24.840 "nvme_admin": false, 00:13:24.840 "nvme_io": false, 00:13:24.840 "nvme_io_md": false, 00:13:24.840 "write_zeroes": true, 00:13:24.840 "zcopy": false, 00:13:24.840 "get_zone_info": false, 00:13:24.840 "zone_management": false, 00:13:24.840 "zone_append": false, 00:13:24.840 "compare": false, 00:13:24.840 "compare_and_write": false, 00:13:24.840 "abort": false, 00:13:24.840 "seek_hole": false, 00:13:24.840 "seek_data": false, 00:13:24.840 "copy": false, 00:13:24.840 "nvme_iov_md": false 00:13:24.840 }, 00:13:24.840 "memory_domains": [ 
00:13:24.840 { 00:13:24.840 "dma_device_id": "system", 00:13:24.840 "dma_device_type": 1 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.840 "dma_device_type": 2 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "dma_device_id": "system", 00:13:24.840 "dma_device_type": 1 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.840 "dma_device_type": 2 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "dma_device_id": "system", 00:13:24.840 "dma_device_type": 1 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.840 "dma_device_type": 2 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "dma_device_id": "system", 00:13:24.840 "dma_device_type": 1 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.840 "dma_device_type": 2 00:13:24.840 } 00:13:24.840 ], 00:13:24.840 "driver_specific": { 00:13:24.840 "raid": { 00:13:24.840 "uuid": "f9e4b060-47b3-44ee-847d-04a890324705", 00:13:24.840 "strip_size_kb": 64, 00:13:24.840 "state": "online", 00:13:24.840 "raid_level": "raid0", 00:13:24.840 "superblock": false, 00:13:24.840 "num_base_bdevs": 4, 00:13:24.840 "num_base_bdevs_discovered": 4, 00:13:24.840 "num_base_bdevs_operational": 4, 00:13:24.840 "base_bdevs_list": [ 00:13:24.840 { 00:13:24.840 "name": "NewBaseBdev", 00:13:24.840 "uuid": "477247e5-ea49-4816-87d1-8d64a78e62f8", 00:13:24.840 "is_configured": true, 00:13:24.840 "data_offset": 0, 00:13:24.840 "data_size": 65536 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "name": "BaseBdev2", 00:13:24.840 "uuid": "26f06122-44a6-42d2-a13d-16e2904b042d", 00:13:24.840 "is_configured": true, 00:13:24.840 "data_offset": 0, 00:13:24.840 "data_size": 65536 00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "name": "BaseBdev3", 00:13:24.840 "uuid": "67556a45-dff5-4bc8-a6e7-e32a2de44fd8", 00:13:24.840 "is_configured": true, 00:13:24.840 "data_offset": 0, 00:13:24.840 "data_size": 65536 
00:13:24.840 }, 00:13:24.840 { 00:13:24.840 "name": "BaseBdev4", 00:13:24.840 "uuid": "7a148889-afff-466d-bb9c-257d101077d9", 00:13:24.840 "is_configured": true, 00:13:24.840 "data_offset": 0, 00:13:24.840 "data_size": 65536 00:13:24.840 } 00:13:24.840 ] 00:13:24.840 } 00:13:24.840 } 00:13:24.840 }' 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:24.840 BaseBdev2 00:13:24.840 BaseBdev3 00:13:24.840 BaseBdev4' 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.840 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.101 
08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.101 [2024-11-27 08:45:21.760385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.101 [2024-11-27 08:45:21.760572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.101 [2024-11-27 08:45:21.760716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.101 [2024-11-27 08:45:21.760817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.101 [2024-11-27 08:45:21.760835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69599 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@951 -- # '[' -z 69599 ']' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # kill -0 69599 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 69599 00:13:25.101 killing process with pid 69599 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 69599' 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # kill 69599 00:13:25.101 [2024-11-27 08:45:21.795257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.101 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 69599 00:13:25.674 [2024-11-27 08:45:22.174074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:26.609 00:13:26.609 real 0m13.359s 00:13:26.609 user 0m21.973s 00:13:26.609 sys 0m1.989s 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:26.609 ************************************ 00:13:26.609 END TEST raid_state_function_test 00:13:26.609 ************************************ 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.609 08:45:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:13:26.609 08:45:23 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:13:26.609 08:45:23 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:26.609 08:45:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.609 ************************************ 00:13:26.609 START TEST raid_state_function_test_sb 00:13:26.609 ************************************ 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test raid0 4 true 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:26.609 
08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70295 00:13:26.609 Process raid pid: 70295 00:13:26.609 08:45:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70295' 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70295 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 70295 ']' 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:26.609 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.868 [2024-11-27 08:45:23.474711] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:13:26.868 [2024-11-27 08:45:23.474945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.127 [2024-11-27 08:45:23.662440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.127 [2024-11-27 08:45:23.859400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.386 [2024-11-27 08:45:24.115562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.386 [2024-11-27 08:45:24.115613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.952 [2024-11-27 08:45:24.522744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:27.952 [2024-11-27 08:45:24.522826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:27.952 [2024-11-27 08:45:24.522844] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.952 [2024-11-27 08:45:24.522861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.952 [2024-11-27 08:45:24.522872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:13:27.952 [2024-11-27 08:45:24.522887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:27.952 [2024-11-27 08:45:24.522898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:27.952 [2024-11-27 08:45:24.522913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.952 08:45:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.952 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.952 "name": "Existed_Raid", 00:13:27.952 "uuid": "feb8e39d-f9a9-4f2c-b18e-097f08a45969", 00:13:27.952 "strip_size_kb": 64, 00:13:27.952 "state": "configuring", 00:13:27.952 "raid_level": "raid0", 00:13:27.952 "superblock": true, 00:13:27.952 "num_base_bdevs": 4, 00:13:27.952 "num_base_bdevs_discovered": 0, 00:13:27.953 "num_base_bdevs_operational": 4, 00:13:27.953 "base_bdevs_list": [ 00:13:27.953 { 00:13:27.953 "name": "BaseBdev1", 00:13:27.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.953 "is_configured": false, 00:13:27.953 "data_offset": 0, 00:13:27.953 "data_size": 0 00:13:27.953 }, 00:13:27.953 { 00:13:27.953 "name": "BaseBdev2", 00:13:27.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.953 "is_configured": false, 00:13:27.953 "data_offset": 0, 00:13:27.953 "data_size": 0 00:13:27.953 }, 00:13:27.953 { 00:13:27.953 "name": "BaseBdev3", 00:13:27.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.953 "is_configured": false, 00:13:27.953 "data_offset": 0, 00:13:27.953 "data_size": 0 00:13:27.953 }, 00:13:27.953 { 00:13:27.953 "name": "BaseBdev4", 00:13:27.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.953 "is_configured": false, 00:13:27.953 "data_offset": 0, 00:13:27.953 "data_size": 0 00:13:27.953 } 00:13:27.953 ] 00:13:27.953 }' 00:13:27.953 08:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.953 08:45:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 [2024-11-27 08:45:25.114831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.521 [2024-11-27 08:45:25.115068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 [2024-11-27 08:45:25.122815] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.521 [2024-11-27 08:45:25.122873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.521 [2024-11-27 08:45:25.122889] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.521 [2024-11-27 08:45:25.122906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.521 [2024-11-27 08:45:25.122916] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:28.521 [2024-11-27 08:45:25.122931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:28.521 [2024-11-27 08:45:25.122941] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:13:28.521 [2024-11-27 08:45:25.122955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 [2024-11-27 08:45:25.171574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.521 BaseBdev1 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 [ 00:13:28.521 { 00:13:28.521 "name": "BaseBdev1", 00:13:28.521 "aliases": [ 00:13:28.521 "ae955e5b-815a-406a-8fc5-90a3fd74f448" 00:13:28.521 ], 00:13:28.521 "product_name": "Malloc disk", 00:13:28.521 "block_size": 512, 00:13:28.521 "num_blocks": 65536, 00:13:28.521 "uuid": "ae955e5b-815a-406a-8fc5-90a3fd74f448", 00:13:28.521 "assigned_rate_limits": { 00:13:28.521 "rw_ios_per_sec": 0, 00:13:28.521 "rw_mbytes_per_sec": 0, 00:13:28.521 "r_mbytes_per_sec": 0, 00:13:28.521 "w_mbytes_per_sec": 0 00:13:28.521 }, 00:13:28.521 "claimed": true, 00:13:28.521 "claim_type": "exclusive_write", 00:13:28.521 "zoned": false, 00:13:28.521 "supported_io_types": { 00:13:28.521 "read": true, 00:13:28.521 "write": true, 00:13:28.521 "unmap": true, 00:13:28.521 "flush": true, 00:13:28.521 "reset": true, 00:13:28.521 "nvme_admin": false, 00:13:28.521 "nvme_io": false, 00:13:28.521 "nvme_io_md": false, 00:13:28.521 "write_zeroes": true, 00:13:28.521 "zcopy": true, 00:13:28.521 "get_zone_info": false, 00:13:28.521 "zone_management": false, 00:13:28.521 "zone_append": false, 00:13:28.521 "compare": false, 00:13:28.521 "compare_and_write": false, 00:13:28.521 "abort": true, 00:13:28.521 "seek_hole": false, 00:13:28.521 "seek_data": false, 00:13:28.521 "copy": true, 00:13:28.521 "nvme_iov_md": false 00:13:28.521 }, 00:13:28.521 "memory_domains": [ 00:13:28.521 { 00:13:28.521 "dma_device_id": "system", 00:13:28.521 "dma_device_type": 1 00:13:28.521 }, 00:13:28.521 { 00:13:28.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.521 "dma_device_type": 2 00:13:28.521 } 00:13:28.521 ], 00:13:28.521 "driver_specific": {} 
00:13:28.521 } 00:13:28.521 ] 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.521 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.521 "name": "Existed_Raid", 00:13:28.521 "uuid": "62514682-f65d-4564-b225-91c676d2d9a2", 00:13:28.521 "strip_size_kb": 64, 00:13:28.521 "state": "configuring", 00:13:28.521 "raid_level": "raid0", 00:13:28.521 "superblock": true, 00:13:28.521 "num_base_bdevs": 4, 00:13:28.521 "num_base_bdevs_discovered": 1, 00:13:28.521 "num_base_bdevs_operational": 4, 00:13:28.521 "base_bdevs_list": [ 00:13:28.521 { 00:13:28.521 "name": "BaseBdev1", 00:13:28.521 "uuid": "ae955e5b-815a-406a-8fc5-90a3fd74f448", 00:13:28.521 "is_configured": true, 00:13:28.521 "data_offset": 2048, 00:13:28.521 "data_size": 63488 00:13:28.521 }, 00:13:28.521 { 00:13:28.521 "name": "BaseBdev2", 00:13:28.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.521 "is_configured": false, 00:13:28.521 "data_offset": 0, 00:13:28.521 "data_size": 0 00:13:28.521 }, 00:13:28.521 { 00:13:28.521 "name": "BaseBdev3", 00:13:28.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.521 "is_configured": false, 00:13:28.521 "data_offset": 0, 00:13:28.521 "data_size": 0 00:13:28.521 }, 00:13:28.521 { 00:13:28.521 "name": "BaseBdev4", 00:13:28.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.521 "is_configured": false, 00:13:28.521 "data_offset": 0, 00:13:28.521 "data_size": 0 00:13:28.521 } 00:13:28.521 ] 00:13:28.521 }' 00:13:28.522 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.522 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.091 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.091 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.091 08:45:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.091 [2024-11-27 08:45:25.743760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.091 [2024-11-27 08:45:25.743844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.092 [2024-11-27 08:45:25.751823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.092 [2024-11-27 08:45:25.754527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.092 [2024-11-27 08:45:25.754639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.092 [2024-11-27 08:45:25.754663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:29.092 [2024-11-27 08:45:25.754684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:29.092 [2024-11-27 08:45:25.754694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:29.092 [2024-11-27 08:45:25.754708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:29.092 08:45:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.092 "name": 
"Existed_Raid", 00:13:29.092 "uuid": "aea13c5e-bf6a-4bd0-908f-0cce0be4540b", 00:13:29.092 "strip_size_kb": 64, 00:13:29.092 "state": "configuring", 00:13:29.092 "raid_level": "raid0", 00:13:29.092 "superblock": true, 00:13:29.092 "num_base_bdevs": 4, 00:13:29.092 "num_base_bdevs_discovered": 1, 00:13:29.092 "num_base_bdevs_operational": 4, 00:13:29.092 "base_bdevs_list": [ 00:13:29.092 { 00:13:29.092 "name": "BaseBdev1", 00:13:29.092 "uuid": "ae955e5b-815a-406a-8fc5-90a3fd74f448", 00:13:29.092 "is_configured": true, 00:13:29.092 "data_offset": 2048, 00:13:29.092 "data_size": 63488 00:13:29.092 }, 00:13:29.092 { 00:13:29.092 "name": "BaseBdev2", 00:13:29.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.092 "is_configured": false, 00:13:29.092 "data_offset": 0, 00:13:29.092 "data_size": 0 00:13:29.092 }, 00:13:29.092 { 00:13:29.092 "name": "BaseBdev3", 00:13:29.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.092 "is_configured": false, 00:13:29.092 "data_offset": 0, 00:13:29.092 "data_size": 0 00:13:29.092 }, 00:13:29.092 { 00:13:29.092 "name": "BaseBdev4", 00:13:29.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.092 "is_configured": false, 00:13:29.092 "data_offset": 0, 00:13:29.092 "data_size": 0 00:13:29.092 } 00:13:29.092 ] 00:13:29.092 }' 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.092 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.661 [2024-11-27 08:45:26.338120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:13:29.661 BaseBdev2 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.661 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.661 [ 00:13:29.661 { 00:13:29.661 "name": "BaseBdev2", 00:13:29.661 "aliases": [ 00:13:29.661 "bc5e527d-33ec-4dad-b853-b4c37f24f0d9" 00:13:29.661 ], 00:13:29.661 "product_name": "Malloc disk", 00:13:29.661 "block_size": 512, 00:13:29.661 "num_blocks": 65536, 00:13:29.661 "uuid": "bc5e527d-33ec-4dad-b853-b4c37f24f0d9", 00:13:29.661 
"assigned_rate_limits": { 00:13:29.661 "rw_ios_per_sec": 0, 00:13:29.661 "rw_mbytes_per_sec": 0, 00:13:29.661 "r_mbytes_per_sec": 0, 00:13:29.661 "w_mbytes_per_sec": 0 00:13:29.661 }, 00:13:29.661 "claimed": true, 00:13:29.661 "claim_type": "exclusive_write", 00:13:29.661 "zoned": false, 00:13:29.661 "supported_io_types": { 00:13:29.661 "read": true, 00:13:29.661 "write": true, 00:13:29.661 "unmap": true, 00:13:29.661 "flush": true, 00:13:29.661 "reset": true, 00:13:29.661 "nvme_admin": false, 00:13:29.661 "nvme_io": false, 00:13:29.661 "nvme_io_md": false, 00:13:29.661 "write_zeroes": true, 00:13:29.661 "zcopy": true, 00:13:29.661 "get_zone_info": false, 00:13:29.661 "zone_management": false, 00:13:29.661 "zone_append": false, 00:13:29.661 "compare": false, 00:13:29.661 "compare_and_write": false, 00:13:29.661 "abort": true, 00:13:29.661 "seek_hole": false, 00:13:29.661 "seek_data": false, 00:13:29.661 "copy": true, 00:13:29.661 "nvme_iov_md": false 00:13:29.661 }, 00:13:29.661 "memory_domains": [ 00:13:29.661 { 00:13:29.662 "dma_device_id": "system", 00:13:29.662 "dma_device_type": 1 00:13:29.662 }, 00:13:29.662 { 00:13:29.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.662 "dma_device_type": 2 00:13:29.662 } 00:13:29.662 ], 00:13:29.662 "driver_specific": {} 00:13:29.662 } 00:13:29.662 ] 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.662 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.920 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.920 "name": "Existed_Raid", 00:13:29.920 "uuid": "aea13c5e-bf6a-4bd0-908f-0cce0be4540b", 00:13:29.920 "strip_size_kb": 64, 00:13:29.920 "state": "configuring", 00:13:29.920 "raid_level": "raid0", 00:13:29.920 "superblock": true, 00:13:29.920 "num_base_bdevs": 4, 00:13:29.920 "num_base_bdevs_discovered": 2, 00:13:29.920 "num_base_bdevs_operational": 4, 
00:13:29.920 "base_bdevs_list": [ 00:13:29.920 { 00:13:29.920 "name": "BaseBdev1", 00:13:29.920 "uuid": "ae955e5b-815a-406a-8fc5-90a3fd74f448", 00:13:29.920 "is_configured": true, 00:13:29.920 "data_offset": 2048, 00:13:29.920 "data_size": 63488 00:13:29.920 }, 00:13:29.920 { 00:13:29.920 "name": "BaseBdev2", 00:13:29.920 "uuid": "bc5e527d-33ec-4dad-b853-b4c37f24f0d9", 00:13:29.920 "is_configured": true, 00:13:29.920 "data_offset": 2048, 00:13:29.920 "data_size": 63488 00:13:29.920 }, 00:13:29.920 { 00:13:29.920 "name": "BaseBdev3", 00:13:29.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.920 "is_configured": false, 00:13:29.920 "data_offset": 0, 00:13:29.920 "data_size": 0 00:13:29.920 }, 00:13:29.920 { 00:13:29.920 "name": "BaseBdev4", 00:13:29.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.920 "is_configured": false, 00:13:29.920 "data_offset": 0, 00:13:29.920 "data_size": 0 00:13:29.920 } 00:13:29.921 ] 00:13:29.921 }' 00:13:29.921 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.921 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.180 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.180 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.180 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 [2024-11-27 08:45:26.959671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.439 BaseBdev3 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_name=BaseBdev3 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 [ 00:13:30.439 { 00:13:30.439 "name": "BaseBdev3", 00:13:30.439 "aliases": [ 00:13:30.439 "17bb7835-4487-4c98-9af4-6a8df3ca4099" 00:13:30.439 ], 00:13:30.439 "product_name": "Malloc disk", 00:13:30.439 "block_size": 512, 00:13:30.439 "num_blocks": 65536, 00:13:30.439 "uuid": "17bb7835-4487-4c98-9af4-6a8df3ca4099", 00:13:30.439 "assigned_rate_limits": { 00:13:30.439 "rw_ios_per_sec": 0, 00:13:30.439 "rw_mbytes_per_sec": 0, 00:13:30.439 "r_mbytes_per_sec": 0, 00:13:30.439 "w_mbytes_per_sec": 0 00:13:30.439 }, 00:13:30.439 "claimed": true, 00:13:30.439 "claim_type": "exclusive_write", 00:13:30.439 "zoned": false, 00:13:30.439 "supported_io_types": { 00:13:30.439 "read": true, 00:13:30.439 
"write": true, 00:13:30.439 "unmap": true, 00:13:30.439 "flush": true, 00:13:30.439 "reset": true, 00:13:30.439 "nvme_admin": false, 00:13:30.439 "nvme_io": false, 00:13:30.439 "nvme_io_md": false, 00:13:30.439 "write_zeroes": true, 00:13:30.439 "zcopy": true, 00:13:30.439 "get_zone_info": false, 00:13:30.439 "zone_management": false, 00:13:30.439 "zone_append": false, 00:13:30.439 "compare": false, 00:13:30.439 "compare_and_write": false, 00:13:30.439 "abort": true, 00:13:30.439 "seek_hole": false, 00:13:30.439 "seek_data": false, 00:13:30.439 "copy": true, 00:13:30.439 "nvme_iov_md": false 00:13:30.439 }, 00:13:30.439 "memory_domains": [ 00:13:30.439 { 00:13:30.439 "dma_device_id": "system", 00:13:30.439 "dma_device_type": 1 00:13:30.439 }, 00:13:30.439 { 00:13:30.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.439 "dma_device_type": 2 00:13:30.439 } 00:13:30.439 ], 00:13:30.439 "driver_specific": {} 00:13:30.439 } 00:13:30.439 ] 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.439 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.439 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.439 "name": "Existed_Raid", 00:13:30.439 "uuid": "aea13c5e-bf6a-4bd0-908f-0cce0be4540b", 00:13:30.439 "strip_size_kb": 64, 00:13:30.439 "state": "configuring", 00:13:30.439 "raid_level": "raid0", 00:13:30.439 "superblock": true, 00:13:30.439 "num_base_bdevs": 4, 00:13:30.439 "num_base_bdevs_discovered": 3, 00:13:30.439 "num_base_bdevs_operational": 4, 00:13:30.439 "base_bdevs_list": [ 00:13:30.439 { 00:13:30.439 "name": "BaseBdev1", 00:13:30.439 "uuid": "ae955e5b-815a-406a-8fc5-90a3fd74f448", 00:13:30.440 "is_configured": true, 00:13:30.440 "data_offset": 2048, 00:13:30.440 "data_size": 63488 00:13:30.440 }, 00:13:30.440 { 00:13:30.440 "name": "BaseBdev2", 00:13:30.440 "uuid": 
"bc5e527d-33ec-4dad-b853-b4c37f24f0d9", 00:13:30.440 "is_configured": true, 00:13:30.440 "data_offset": 2048, 00:13:30.440 "data_size": 63488 00:13:30.440 }, 00:13:30.440 { 00:13:30.440 "name": "BaseBdev3", 00:13:30.440 "uuid": "17bb7835-4487-4c98-9af4-6a8df3ca4099", 00:13:30.440 "is_configured": true, 00:13:30.440 "data_offset": 2048, 00:13:30.440 "data_size": 63488 00:13:30.440 }, 00:13:30.440 { 00:13:30.440 "name": "BaseBdev4", 00:13:30.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.440 "is_configured": false, 00:13:30.440 "data_offset": 0, 00:13:30.440 "data_size": 0 00:13:30.440 } 00:13:30.440 ] 00:13:30.440 }' 00:13:30.440 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.440 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.011 [2024-11-27 08:45:27.549930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:31.011 [2024-11-27 08:45:27.550330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:31.011 [2024-11-27 08:45:27.550379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:31.011 BaseBdev4 00:13:31.011 [2024-11-27 08:45:27.550742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:31.011 [2024-11-27 08:45:27.550968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:31.011 [2024-11-27 08:45:27.550993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:31.011 [2024-11-27 08:45:27.551194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.011 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.011 [ 00:13:31.011 { 00:13:31.011 "name": "BaseBdev4", 00:13:31.011 "aliases": [ 00:13:31.011 "4f8aa5e1-0c83-4d10-8e75-c9a5bc45622e" 00:13:31.011 ], 00:13:31.011 "product_name": "Malloc disk", 00:13:31.011 "block_size": 512, 00:13:31.011 
"num_blocks": 65536, 00:13:31.011 "uuid": "4f8aa5e1-0c83-4d10-8e75-c9a5bc45622e", 00:13:31.011 "assigned_rate_limits": { 00:13:31.011 "rw_ios_per_sec": 0, 00:13:31.011 "rw_mbytes_per_sec": 0, 00:13:31.011 "r_mbytes_per_sec": 0, 00:13:31.011 "w_mbytes_per_sec": 0 00:13:31.011 }, 00:13:31.011 "claimed": true, 00:13:31.011 "claim_type": "exclusive_write", 00:13:31.011 "zoned": false, 00:13:31.011 "supported_io_types": { 00:13:31.011 "read": true, 00:13:31.011 "write": true, 00:13:31.011 "unmap": true, 00:13:31.011 "flush": true, 00:13:31.011 "reset": true, 00:13:31.011 "nvme_admin": false, 00:13:31.011 "nvme_io": false, 00:13:31.011 "nvme_io_md": false, 00:13:31.011 "write_zeroes": true, 00:13:31.011 "zcopy": true, 00:13:31.011 "get_zone_info": false, 00:13:31.011 "zone_management": false, 00:13:31.011 "zone_append": false, 00:13:31.011 "compare": false, 00:13:31.011 "compare_and_write": false, 00:13:31.011 "abort": true, 00:13:31.011 "seek_hole": false, 00:13:31.011 "seek_data": false, 00:13:31.011 "copy": true, 00:13:31.011 "nvme_iov_md": false 00:13:31.011 }, 00:13:31.011 "memory_domains": [ 00:13:31.011 { 00:13:31.011 "dma_device_id": "system", 00:13:31.011 "dma_device_type": 1 00:13:31.011 }, 00:13:31.011 { 00:13:31.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.011 "dma_device_type": 2 00:13:31.011 } 00:13:31.011 ], 00:13:31.011 "driver_specific": {} 00:13:31.011 } 00:13:31.011 ] 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.012 "name": "Existed_Raid", 00:13:31.012 "uuid": "aea13c5e-bf6a-4bd0-908f-0cce0be4540b", 00:13:31.012 "strip_size_kb": 64, 00:13:31.012 "state": "online", 00:13:31.012 "raid_level": "raid0", 00:13:31.012 "superblock": true, 00:13:31.012 "num_base_bdevs": 4, 
00:13:31.012 "num_base_bdevs_discovered": 4, 00:13:31.012 "num_base_bdevs_operational": 4, 00:13:31.012 "base_bdevs_list": [ 00:13:31.012 { 00:13:31.012 "name": "BaseBdev1", 00:13:31.012 "uuid": "ae955e5b-815a-406a-8fc5-90a3fd74f448", 00:13:31.012 "is_configured": true, 00:13:31.012 "data_offset": 2048, 00:13:31.012 "data_size": 63488 00:13:31.012 }, 00:13:31.012 { 00:13:31.012 "name": "BaseBdev2", 00:13:31.012 "uuid": "bc5e527d-33ec-4dad-b853-b4c37f24f0d9", 00:13:31.012 "is_configured": true, 00:13:31.012 "data_offset": 2048, 00:13:31.012 "data_size": 63488 00:13:31.012 }, 00:13:31.012 { 00:13:31.012 "name": "BaseBdev3", 00:13:31.012 "uuid": "17bb7835-4487-4c98-9af4-6a8df3ca4099", 00:13:31.012 "is_configured": true, 00:13:31.012 "data_offset": 2048, 00:13:31.012 "data_size": 63488 00:13:31.012 }, 00:13:31.012 { 00:13:31.012 "name": "BaseBdev4", 00:13:31.012 "uuid": "4f8aa5e1-0c83-4d10-8e75-c9a5bc45622e", 00:13:31.012 "is_configured": true, 00:13:31.012 "data_offset": 2048, 00:13:31.012 "data_size": 63488 00:13:31.012 } 00:13:31.012 ] 00:13:31.012 }' 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.012 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.580 
08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.580 [2024-11-27 08:45:28.090636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.580 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.580 "name": "Existed_Raid", 00:13:31.580 "aliases": [ 00:13:31.580 "aea13c5e-bf6a-4bd0-908f-0cce0be4540b" 00:13:31.580 ], 00:13:31.580 "product_name": "Raid Volume", 00:13:31.580 "block_size": 512, 00:13:31.580 "num_blocks": 253952, 00:13:31.580 "uuid": "aea13c5e-bf6a-4bd0-908f-0cce0be4540b", 00:13:31.580 "assigned_rate_limits": { 00:13:31.580 "rw_ios_per_sec": 0, 00:13:31.580 "rw_mbytes_per_sec": 0, 00:13:31.580 "r_mbytes_per_sec": 0, 00:13:31.580 "w_mbytes_per_sec": 0 00:13:31.580 }, 00:13:31.580 "claimed": false, 00:13:31.580 "zoned": false, 00:13:31.580 "supported_io_types": { 00:13:31.580 "read": true, 00:13:31.580 "write": true, 00:13:31.580 "unmap": true, 00:13:31.580 "flush": true, 00:13:31.580 "reset": true, 00:13:31.580 "nvme_admin": false, 00:13:31.580 "nvme_io": false, 00:13:31.580 "nvme_io_md": false, 00:13:31.580 "write_zeroes": true, 00:13:31.580 "zcopy": false, 00:13:31.580 "get_zone_info": false, 00:13:31.580 "zone_management": false, 00:13:31.580 "zone_append": false, 00:13:31.580 "compare": false, 00:13:31.580 "compare_and_write": false, 00:13:31.580 "abort": false, 00:13:31.580 "seek_hole": false, 00:13:31.580 "seek_data": false, 00:13:31.580 "copy": false, 00:13:31.580 
"nvme_iov_md": false 00:13:31.580 }, 00:13:31.580 "memory_domains": [ 00:13:31.580 { 00:13:31.580 "dma_device_id": "system", 00:13:31.580 "dma_device_type": 1 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.580 "dma_device_type": 2 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "dma_device_id": "system", 00:13:31.580 "dma_device_type": 1 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.580 "dma_device_type": 2 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "dma_device_id": "system", 00:13:31.580 "dma_device_type": 1 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.580 "dma_device_type": 2 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "dma_device_id": "system", 00:13:31.580 "dma_device_type": 1 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.580 "dma_device_type": 2 00:13:31.580 } 00:13:31.580 ], 00:13:31.580 "driver_specific": { 00:13:31.580 "raid": { 00:13:31.580 "uuid": "aea13c5e-bf6a-4bd0-908f-0cce0be4540b", 00:13:31.580 "strip_size_kb": 64, 00:13:31.580 "state": "online", 00:13:31.580 "raid_level": "raid0", 00:13:31.580 "superblock": true, 00:13:31.580 "num_base_bdevs": 4, 00:13:31.580 "num_base_bdevs_discovered": 4, 00:13:31.580 "num_base_bdevs_operational": 4, 00:13:31.580 "base_bdevs_list": [ 00:13:31.580 { 00:13:31.580 "name": "BaseBdev1", 00:13:31.580 "uuid": "ae955e5b-815a-406a-8fc5-90a3fd74f448", 00:13:31.580 "is_configured": true, 00:13:31.580 "data_offset": 2048, 00:13:31.580 "data_size": 63488 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "name": "BaseBdev2", 00:13:31.580 "uuid": "bc5e527d-33ec-4dad-b853-b4c37f24f0d9", 00:13:31.580 "is_configured": true, 00:13:31.580 "data_offset": 2048, 00:13:31.580 "data_size": 63488 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "name": "BaseBdev3", 00:13:31.580 "uuid": "17bb7835-4487-4c98-9af4-6a8df3ca4099", 00:13:31.580 "is_configured": true, 
00:13:31.580 "data_offset": 2048, 00:13:31.580 "data_size": 63488 00:13:31.580 }, 00:13:31.580 { 00:13:31.580 "name": "BaseBdev4", 00:13:31.580 "uuid": "4f8aa5e1-0c83-4d10-8e75-c9a5bc45622e", 00:13:31.580 "is_configured": true, 00:13:31.580 "data_offset": 2048, 00:13:31.581 "data_size": 63488 00:13:31.581 } 00:13:31.581 ] 00:13:31.581 } 00:13:31.581 } 00:13:31.581 }' 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:31.581 BaseBdev2 00:13:31.581 BaseBdev3 00:13:31.581 BaseBdev4' 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.581 08:45:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.581 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.840 [2024-11-27 08:45:28.466393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.840 [2024-11-27 08:45:28.466451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.840 [2024-11-27 08:45:28.466531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.840 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
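The `has_redundancy raid0` call traced above returns 1, flipping the expected state to `offline`: raid0 stripes without parity or mirroring, so removing `BaseBdev1` cannot leave the array online. A sketch of that decision follows; the exact set of levels the suite treats as redundant is an assumption here, inferred from the control flow rather than read from `bdev_raid.sh`.

```shell
# Assumed helper: return 0 (success) for raid levels that can survive losing
# a base bdev, 1 for those that cannot. The level list is illustrative.
has_redundancy() {
	case $1 in
		raid1 | raid5f) return 0 ;; # mirrored / parity levels (assumed)
		*) return 1 ;;              # raid0, concat: no redundancy
	esac
}

# After deleting BaseBdev1, the suite expects raid0 to go offline.
expected_state=online
if ! has_redundancy raid0; then
	expected_state=offline
fi
echo "$expected_state"
```

This matches the transition the log shows next: `raid bdev state changing from online to offline` followed by `verify_raid_bdev_state Existed_Raid offline raid0 64 3`.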
00:13:32.099 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.099 "name": "Existed_Raid", 00:13:32.099 "uuid": "aea13c5e-bf6a-4bd0-908f-0cce0be4540b", 00:13:32.099 "strip_size_kb": 64, 00:13:32.099 "state": "offline", 00:13:32.099 "raid_level": "raid0", 00:13:32.099 "superblock": true, 00:13:32.099 "num_base_bdevs": 4, 00:13:32.099 "num_base_bdevs_discovered": 3, 00:13:32.099 "num_base_bdevs_operational": 3, 00:13:32.099 "base_bdevs_list": [ 00:13:32.099 { 00:13:32.099 "name": null, 00:13:32.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.099 "is_configured": false, 00:13:32.099 "data_offset": 0, 00:13:32.099 "data_size": 63488 00:13:32.099 }, 00:13:32.099 { 00:13:32.099 "name": "BaseBdev2", 00:13:32.099 "uuid": "bc5e527d-33ec-4dad-b853-b4c37f24f0d9", 00:13:32.099 "is_configured": true, 00:13:32.099 "data_offset": 2048, 00:13:32.099 "data_size": 63488 00:13:32.099 }, 00:13:32.099 { 00:13:32.099 "name": "BaseBdev3", 00:13:32.099 "uuid": "17bb7835-4487-4c98-9af4-6a8df3ca4099", 00:13:32.099 "is_configured": true, 00:13:32.099 "data_offset": 2048, 00:13:32.099 "data_size": 63488 00:13:32.099 }, 00:13:32.099 { 00:13:32.099 "name": "BaseBdev4", 00:13:32.099 "uuid": "4f8aa5e1-0c83-4d10-8e75-c9a5bc45622e", 00:13:32.099 "is_configured": true, 00:13:32.099 "data_offset": 2048, 00:13:32.099 "data_size": 63488 00:13:32.099 } 00:13:32.099 ] 00:13:32.099 }' 00:13:32.099 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.099 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.357 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:32.357 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.357 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.357 
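The loop the trace enters next (`bdev_raid.sh@270`–`@276`) deletes one base bdev per iteration and, before each deletion, confirms the raid bdev still reports its name via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[0]["name"]'`. A self-contained sketch of that control flow, with `rpc_cmd` stubbed to canned output so it runs anywhere:

```shell
# Illustration-only stub standing in for rpc.py plus the jq '.[0]["name"]'
# extraction the suite performs on the real RPC output.
rpc_cmd() {
	case $1 in
		bdev_raid_get_bdevs) echo Existed_Raid ;;
		bdev_malloc_delete) echo "deleted $2" ;;
	esac
}

# BaseBdev1 was already deleted before the loop, so i starts at 1 and the
# loop removes BaseBdev2..BaseBdev4, as the trace shows.
num_base_bdevs=4
for ((i = 1; i < num_base_bdevs; i++)); do
	raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all)
	if [[ $raid_bdev != Existed_Raid ]]; then
		echo "raid bdev vanished early" >&2
		exit 1
	fi
	rpc_cmd bdev_malloc_delete "BaseBdev$((i + 1))"
done
```

Because raid0 has no redundancy, the raid bdev survives these removals only in the degraded `offline` state; once every base bdev is gone the trace shows `raid_bdev_cleanup ... state offline` and `bdev_raid_get_bdevs` returns nothing.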
08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.357 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.357 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.357 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.614 [2024-11-27 08:45:29.146420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.614 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.614 [2024-11-27 08:45:29.295933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.871 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:32.872 08:45:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 [2024-11-27 08:45:29.447007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:32.872 [2024-11-27 08:45:29.447088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.872 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.131 BaseBdev2 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.131 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.131 [ 00:13:33.131 { 00:13:33.131 "name": "BaseBdev2", 00:13:33.131 "aliases": [ 00:13:33.131 
"dfb6b71a-34da-48ce-982c-4b02f11bcd3b" 00:13:33.131 ], 00:13:33.131 "product_name": "Malloc disk", 00:13:33.131 "block_size": 512, 00:13:33.131 "num_blocks": 65536, 00:13:33.131 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:33.131 "assigned_rate_limits": { 00:13:33.131 "rw_ios_per_sec": 0, 00:13:33.131 "rw_mbytes_per_sec": 0, 00:13:33.131 "r_mbytes_per_sec": 0, 00:13:33.131 "w_mbytes_per_sec": 0 00:13:33.131 }, 00:13:33.131 "claimed": false, 00:13:33.131 "zoned": false, 00:13:33.131 "supported_io_types": { 00:13:33.131 "read": true, 00:13:33.131 "write": true, 00:13:33.131 "unmap": true, 00:13:33.131 "flush": true, 00:13:33.131 "reset": true, 00:13:33.131 "nvme_admin": false, 00:13:33.131 "nvme_io": false, 00:13:33.131 "nvme_io_md": false, 00:13:33.131 "write_zeroes": true, 00:13:33.131 "zcopy": true, 00:13:33.131 "get_zone_info": false, 00:13:33.131 "zone_management": false, 00:13:33.131 "zone_append": false, 00:13:33.131 "compare": false, 00:13:33.131 "compare_and_write": false, 00:13:33.131 "abort": true, 00:13:33.131 "seek_hole": false, 00:13:33.131 "seek_data": false, 00:13:33.132 "copy": true, 00:13:33.132 "nvme_iov_md": false 00:13:33.132 }, 00:13:33.132 "memory_domains": [ 00:13:33.132 { 00:13:33.132 "dma_device_id": "system", 00:13:33.132 "dma_device_type": 1 00:13:33.132 }, 00:13:33.132 { 00:13:33.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.132 "dma_device_type": 2 00:13:33.132 } 00:13:33.132 ], 00:13:33.132 "driver_specific": {} 00:13:33.132 } 00:13:33.132 ] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:33.132 08:45:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 BaseBdev3 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 [ 00:13:33.132 { 
00:13:33.132 "name": "BaseBdev3", 00:13:33.132 "aliases": [ 00:13:33.132 "94e50c91-0251-4cd0-9b73-4b5f61adb1e0" 00:13:33.132 ], 00:13:33.132 "product_name": "Malloc disk", 00:13:33.132 "block_size": 512, 00:13:33.132 "num_blocks": 65536, 00:13:33.132 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:33.132 "assigned_rate_limits": { 00:13:33.132 "rw_ios_per_sec": 0, 00:13:33.132 "rw_mbytes_per_sec": 0, 00:13:33.132 "r_mbytes_per_sec": 0, 00:13:33.132 "w_mbytes_per_sec": 0 00:13:33.132 }, 00:13:33.132 "claimed": false, 00:13:33.132 "zoned": false, 00:13:33.132 "supported_io_types": { 00:13:33.132 "read": true, 00:13:33.132 "write": true, 00:13:33.132 "unmap": true, 00:13:33.132 "flush": true, 00:13:33.132 "reset": true, 00:13:33.132 "nvme_admin": false, 00:13:33.132 "nvme_io": false, 00:13:33.132 "nvme_io_md": false, 00:13:33.132 "write_zeroes": true, 00:13:33.132 "zcopy": true, 00:13:33.132 "get_zone_info": false, 00:13:33.132 "zone_management": false, 00:13:33.132 "zone_append": false, 00:13:33.132 "compare": false, 00:13:33.132 "compare_and_write": false, 00:13:33.132 "abort": true, 00:13:33.132 "seek_hole": false, 00:13:33.132 "seek_data": false, 00:13:33.132 "copy": true, 00:13:33.132 "nvme_iov_md": false 00:13:33.132 }, 00:13:33.132 "memory_domains": [ 00:13:33.132 { 00:13:33.132 "dma_device_id": "system", 00:13:33.132 "dma_device_type": 1 00:13:33.132 }, 00:13:33.132 { 00:13:33.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.132 "dma_device_type": 2 00:13:33.132 } 00:13:33.132 ], 00:13:33.132 "driver_specific": {} 00:13:33.132 } 00:13:33.132 ] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 BaseBdev4 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.132 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:33.132 [ 00:13:33.132 { 00:13:33.132 "name": "BaseBdev4", 00:13:33.132 "aliases": [ 00:13:33.132 "abcc984a-801c-4b08-926d-54b163b8dba8" 00:13:33.132 ], 00:13:33.132 "product_name": "Malloc disk", 00:13:33.132 "block_size": 512, 00:13:33.132 "num_blocks": 65536, 00:13:33.132 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:33.132 "assigned_rate_limits": { 00:13:33.132 "rw_ios_per_sec": 0, 00:13:33.132 "rw_mbytes_per_sec": 0, 00:13:33.132 "r_mbytes_per_sec": 0, 00:13:33.132 "w_mbytes_per_sec": 0 00:13:33.132 }, 00:13:33.132 "claimed": false, 00:13:33.132 "zoned": false, 00:13:33.132 "supported_io_types": { 00:13:33.132 "read": true, 00:13:33.132 "write": true, 00:13:33.132 "unmap": true, 00:13:33.132 "flush": true, 00:13:33.132 "reset": true, 00:13:33.132 "nvme_admin": false, 00:13:33.132 "nvme_io": false, 00:13:33.132 "nvme_io_md": false, 00:13:33.132 "write_zeroes": true, 00:13:33.132 "zcopy": true, 00:13:33.132 "get_zone_info": false, 00:13:33.132 "zone_management": false, 00:13:33.132 "zone_append": false, 00:13:33.132 "compare": false, 00:13:33.132 "compare_and_write": false, 00:13:33.132 "abort": true, 00:13:33.132 "seek_hole": false, 00:13:33.132 "seek_data": false, 00:13:33.132 "copy": true, 00:13:33.132 "nvme_iov_md": false 00:13:33.132 }, 00:13:33.132 "memory_domains": [ 00:13:33.132 { 00:13:33.132 "dma_device_id": "system", 00:13:33.132 "dma_device_type": 1 00:13:33.132 }, 00:13:33.132 { 00:13:33.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.132 "dma_device_type": 2 00:13:33.132 } 00:13:33.132 ], 00:13:33.132 "driver_specific": {} 00:13:33.133 } 00:13:33.133 ] 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:33.133 08:45:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.133 [2024-11-27 08:45:29.844298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.133 [2024-11-27 08:45:29.844517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.133 [2024-11-27 08:45:29.844662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.133 [2024-11-27 08:45:29.847394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.133 [2024-11-27 08:45:29.847600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.133 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.392 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.392 "name": "Existed_Raid", 00:13:33.392 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:33.392 "strip_size_kb": 64, 00:13:33.392 "state": "configuring", 00:13:33.392 "raid_level": "raid0", 00:13:33.392 "superblock": true, 00:13:33.392 "num_base_bdevs": 4, 00:13:33.392 "num_base_bdevs_discovered": 3, 00:13:33.392 "num_base_bdevs_operational": 4, 00:13:33.392 "base_bdevs_list": [ 00:13:33.392 { 00:13:33.392 "name": "BaseBdev1", 00:13:33.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.392 "is_configured": false, 00:13:33.392 "data_offset": 0, 00:13:33.392 "data_size": 0 00:13:33.392 }, 00:13:33.392 { 00:13:33.392 "name": "BaseBdev2", 00:13:33.392 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:33.392 "is_configured": true, 00:13:33.392 "data_offset": 2048, 00:13:33.392 "data_size": 63488 
00:13:33.392 }, 00:13:33.392 { 00:13:33.392 "name": "BaseBdev3", 00:13:33.392 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:33.392 "is_configured": true, 00:13:33.392 "data_offset": 2048, 00:13:33.392 "data_size": 63488 00:13:33.392 }, 00:13:33.392 { 00:13:33.392 "name": "BaseBdev4", 00:13:33.392 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:33.392 "is_configured": true, 00:13:33.392 "data_offset": 2048, 00:13:33.392 "data_size": 63488 00:13:33.392 } 00:13:33.392 ] 00:13:33.392 }' 00:13:33.392 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.392 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.652 [2024-11-27 08:45:30.340471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.652 "name": "Existed_Raid", 00:13:33.652 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:33.652 "strip_size_kb": 64, 00:13:33.652 "state": "configuring", 00:13:33.652 "raid_level": "raid0", 00:13:33.652 "superblock": true, 00:13:33.652 "num_base_bdevs": 4, 00:13:33.652 "num_base_bdevs_discovered": 2, 00:13:33.652 "num_base_bdevs_operational": 4, 00:13:33.652 "base_bdevs_list": [ 00:13:33.652 { 00:13:33.652 "name": "BaseBdev1", 00:13:33.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.652 "is_configured": false, 00:13:33.652 "data_offset": 0, 00:13:33.652 "data_size": 0 00:13:33.652 }, 00:13:33.652 { 00:13:33.652 "name": null, 00:13:33.652 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:33.652 "is_configured": false, 00:13:33.652 "data_offset": 0, 00:13:33.652 "data_size": 63488 
00:13:33.652 }, 00:13:33.652 { 00:13:33.652 "name": "BaseBdev3", 00:13:33.652 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:33.652 "is_configured": true, 00:13:33.652 "data_offset": 2048, 00:13:33.652 "data_size": 63488 00:13:33.652 }, 00:13:33.652 { 00:13:33.652 "name": "BaseBdev4", 00:13:33.652 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:33.652 "is_configured": true, 00:13:33.652 "data_offset": 2048, 00:13:33.652 "data_size": 63488 00:13:33.652 } 00:13:33.652 ] 00:13:33.652 }' 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.652 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.321 [2024-11-27 08:45:30.969893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.321 BaseBdev1 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.321 [ 00:13:34.321 { 00:13:34.321 "name": "BaseBdev1", 00:13:34.321 "aliases": [ 00:13:34.321 "baf4039b-26ed-428e-b26d-965508263339" 00:13:34.321 ], 00:13:34.321 "product_name": "Malloc disk", 00:13:34.321 "block_size": 512, 00:13:34.321 "num_blocks": 65536, 00:13:34.321 "uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:34.321 "assigned_rate_limits": { 00:13:34.321 "rw_ios_per_sec": 0, 00:13:34.321 "rw_mbytes_per_sec": 0, 
00:13:34.321 "r_mbytes_per_sec": 0, 00:13:34.321 "w_mbytes_per_sec": 0 00:13:34.321 }, 00:13:34.321 "claimed": true, 00:13:34.321 "claim_type": "exclusive_write", 00:13:34.321 "zoned": false, 00:13:34.321 "supported_io_types": { 00:13:34.321 "read": true, 00:13:34.321 "write": true, 00:13:34.321 "unmap": true, 00:13:34.321 "flush": true, 00:13:34.321 "reset": true, 00:13:34.321 "nvme_admin": false, 00:13:34.321 "nvme_io": false, 00:13:34.321 "nvme_io_md": false, 00:13:34.321 "write_zeroes": true, 00:13:34.321 "zcopy": true, 00:13:34.321 "get_zone_info": false, 00:13:34.321 "zone_management": false, 00:13:34.321 "zone_append": false, 00:13:34.321 "compare": false, 00:13:34.321 "compare_and_write": false, 00:13:34.321 "abort": true, 00:13:34.321 "seek_hole": false, 00:13:34.321 "seek_data": false, 00:13:34.321 "copy": true, 00:13:34.321 "nvme_iov_md": false 00:13:34.321 }, 00:13:34.321 "memory_domains": [ 00:13:34.321 { 00:13:34.321 "dma_device_id": "system", 00:13:34.321 "dma_device_type": 1 00:13:34.321 }, 00:13:34.321 { 00:13:34.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.321 "dma_device_type": 2 00:13:34.321 } 00:13:34.321 ], 00:13:34.321 "driver_specific": {} 00:13:34.321 } 00:13:34.321 ] 00:13:34.321 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.321 08:45:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.321 "name": "Existed_Raid", 00:13:34.321 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:34.321 "strip_size_kb": 64, 00:13:34.321 "state": "configuring", 00:13:34.321 "raid_level": "raid0", 00:13:34.321 "superblock": true, 00:13:34.321 "num_base_bdevs": 4, 00:13:34.321 "num_base_bdevs_discovered": 3, 00:13:34.321 "num_base_bdevs_operational": 4, 00:13:34.321 "base_bdevs_list": [ 00:13:34.321 { 00:13:34.321 "name": "BaseBdev1", 00:13:34.321 "uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:34.321 "is_configured": true, 00:13:34.321 "data_offset": 2048, 00:13:34.321 "data_size": 63488 00:13:34.321 }, 00:13:34.321 { 
00:13:34.321 "name": null, 00:13:34.321 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:34.321 "is_configured": false, 00:13:34.321 "data_offset": 0, 00:13:34.321 "data_size": 63488 00:13:34.321 }, 00:13:34.321 { 00:13:34.321 "name": "BaseBdev3", 00:13:34.321 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:34.321 "is_configured": true, 00:13:34.321 "data_offset": 2048, 00:13:34.321 "data_size": 63488 00:13:34.321 }, 00:13:34.321 { 00:13:34.321 "name": "BaseBdev4", 00:13:34.321 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:34.321 "is_configured": true, 00:13:34.321 "data_offset": 2048, 00:13:34.321 "data_size": 63488 00:13:34.321 } 00:13:34.321 ] 00:13:34.321 }' 00:13:34.321 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.322 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.890 [2024-11-27 08:45:31.570116] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.890 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.890 08:45:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.890 "name": "Existed_Raid", 00:13:34.890 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:34.890 "strip_size_kb": 64, 00:13:34.890 "state": "configuring", 00:13:34.890 "raid_level": "raid0", 00:13:34.890 "superblock": true, 00:13:34.890 "num_base_bdevs": 4, 00:13:34.890 "num_base_bdevs_discovered": 2, 00:13:34.890 "num_base_bdevs_operational": 4, 00:13:34.890 "base_bdevs_list": [ 00:13:34.890 { 00:13:34.890 "name": "BaseBdev1", 00:13:34.890 "uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:34.890 "is_configured": true, 00:13:34.890 "data_offset": 2048, 00:13:34.890 "data_size": 63488 00:13:34.890 }, 00:13:34.890 { 00:13:34.890 "name": null, 00:13:34.890 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:34.891 "is_configured": false, 00:13:34.891 "data_offset": 0, 00:13:34.891 "data_size": 63488 00:13:34.891 }, 00:13:34.891 { 00:13:34.891 "name": null, 00:13:34.891 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:34.891 "is_configured": false, 00:13:34.891 "data_offset": 0, 00:13:34.891 "data_size": 63488 00:13:34.891 }, 00:13:34.891 { 00:13:34.891 "name": "BaseBdev4", 00:13:34.891 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:34.891 "is_configured": true, 00:13:34.891 "data_offset": 2048, 00:13:34.891 "data_size": 63488 00:13:34.891 } 00:13:34.891 ] 00:13:34.891 }' 00:13:34.891 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.891 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.458 
08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.458 [2024-11-27 08:45:32.162314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.458 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.718 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.718 "name": "Existed_Raid", 00:13:35.718 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:35.718 "strip_size_kb": 64, 00:13:35.718 "state": "configuring", 00:13:35.718 "raid_level": "raid0", 00:13:35.718 "superblock": true, 00:13:35.718 "num_base_bdevs": 4, 00:13:35.718 "num_base_bdevs_discovered": 3, 00:13:35.718 "num_base_bdevs_operational": 4, 00:13:35.718 "base_bdevs_list": [ 00:13:35.718 { 00:13:35.718 "name": "BaseBdev1", 00:13:35.718 "uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:35.718 "is_configured": true, 00:13:35.718 "data_offset": 2048, 00:13:35.718 "data_size": 63488 00:13:35.718 }, 00:13:35.718 { 00:13:35.718 "name": null, 00:13:35.718 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:35.718 "is_configured": false, 00:13:35.718 "data_offset": 0, 00:13:35.718 "data_size": 63488 00:13:35.718 }, 00:13:35.718 { 00:13:35.718 "name": "BaseBdev3", 00:13:35.718 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:35.718 "is_configured": true, 00:13:35.718 "data_offset": 2048, 00:13:35.718 "data_size": 63488 00:13:35.718 }, 00:13:35.718 { 00:13:35.718 "name": "BaseBdev4", 00:13:35.718 "uuid": 
"abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:35.718 "is_configured": true, 00:13:35.718 "data_offset": 2048, 00:13:35.718 "data_size": 63488 00:13:35.718 } 00:13:35.718 ] 00:13:35.718 }' 00:13:35.718 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.718 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.977 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:35.977 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.977 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.977 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.235 [2024-11-27 08:45:32.750535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.235 "name": "Existed_Raid", 00:13:36.235 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:36.235 "strip_size_kb": 64, 00:13:36.235 "state": "configuring", 00:13:36.235 "raid_level": "raid0", 00:13:36.235 "superblock": true, 00:13:36.235 "num_base_bdevs": 4, 00:13:36.235 "num_base_bdevs_discovered": 2, 00:13:36.235 "num_base_bdevs_operational": 4, 00:13:36.235 "base_bdevs_list": [ 00:13:36.235 { 00:13:36.235 "name": null, 00:13:36.235 
"uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:36.235 "is_configured": false, 00:13:36.235 "data_offset": 0, 00:13:36.235 "data_size": 63488 00:13:36.235 }, 00:13:36.235 { 00:13:36.235 "name": null, 00:13:36.235 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:36.235 "is_configured": false, 00:13:36.235 "data_offset": 0, 00:13:36.235 "data_size": 63488 00:13:36.235 }, 00:13:36.235 { 00:13:36.235 "name": "BaseBdev3", 00:13:36.235 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:36.235 "is_configured": true, 00:13:36.235 "data_offset": 2048, 00:13:36.235 "data_size": 63488 00:13:36.235 }, 00:13:36.235 { 00:13:36.235 "name": "BaseBdev4", 00:13:36.235 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:36.235 "is_configured": true, 00:13:36.235 "data_offset": 2048, 00:13:36.235 "data_size": 63488 00:13:36.235 } 00:13:36.235 ] 00:13:36.235 }' 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.235 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.803 [2024-11-27 08:45:33.432309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.803 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.804 08:45:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.804 "name": "Existed_Raid", 00:13:36.804 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:36.804 "strip_size_kb": 64, 00:13:36.804 "state": "configuring", 00:13:36.804 "raid_level": "raid0", 00:13:36.804 "superblock": true, 00:13:36.804 "num_base_bdevs": 4, 00:13:36.804 "num_base_bdevs_discovered": 3, 00:13:36.804 "num_base_bdevs_operational": 4, 00:13:36.804 "base_bdevs_list": [ 00:13:36.804 { 00:13:36.804 "name": null, 00:13:36.804 "uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:36.804 "is_configured": false, 00:13:36.804 "data_offset": 0, 00:13:36.804 "data_size": 63488 00:13:36.804 }, 00:13:36.804 { 00:13:36.804 "name": "BaseBdev2", 00:13:36.804 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:36.804 "is_configured": true, 00:13:36.804 "data_offset": 2048, 00:13:36.804 "data_size": 63488 00:13:36.804 }, 00:13:36.804 { 00:13:36.804 "name": "BaseBdev3", 00:13:36.804 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:36.804 "is_configured": true, 00:13:36.804 "data_offset": 2048, 00:13:36.804 "data_size": 63488 00:13:36.804 }, 00:13:36.804 { 00:13:36.804 "name": "BaseBdev4", 00:13:36.804 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:36.804 "is_configured": true, 00:13:36.804 "data_offset": 2048, 00:13:36.804 "data_size": 63488 00:13:36.804 } 00:13:36.804 ] 00:13:36.804 }' 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.804 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.372 08:45:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.372 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u baf4039b-26ed-428e-b26d-965508263339 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.372 [2024-11-27 08:45:34.085896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:37.372 [2024-11-27 08:45:34.086241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:37.372 [2024-11-27 08:45:34.086260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:37.372 NewBaseBdev 00:13:37.372 [2024-11-27 08:45:34.086646] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:37.372 [2024-11-27 08:45:34.086850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:37.372 [2024-11-27 08:45:34.086882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:37.372 [2024-11-27 08:45:34.087052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:37.372 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.372 
08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.372 [ 00:13:37.372 { 00:13:37.372 "name": "NewBaseBdev", 00:13:37.372 "aliases": [ 00:13:37.372 "baf4039b-26ed-428e-b26d-965508263339" 00:13:37.372 ], 00:13:37.372 "product_name": "Malloc disk", 00:13:37.372 "block_size": 512, 00:13:37.372 "num_blocks": 65536, 00:13:37.372 "uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:37.372 "assigned_rate_limits": { 00:13:37.372 "rw_ios_per_sec": 0, 00:13:37.372 "rw_mbytes_per_sec": 0, 00:13:37.372 "r_mbytes_per_sec": 0, 00:13:37.372 "w_mbytes_per_sec": 0 00:13:37.372 }, 00:13:37.372 "claimed": true, 00:13:37.372 "claim_type": "exclusive_write", 00:13:37.372 "zoned": false, 00:13:37.372 "supported_io_types": { 00:13:37.372 "read": true, 00:13:37.372 "write": true, 00:13:37.372 "unmap": true, 00:13:37.372 "flush": true, 00:13:37.372 "reset": true, 00:13:37.372 "nvme_admin": false, 00:13:37.372 "nvme_io": false, 00:13:37.372 "nvme_io_md": false, 00:13:37.372 "write_zeroes": true, 00:13:37.372 "zcopy": true, 00:13:37.373 "get_zone_info": false, 00:13:37.373 "zone_management": false, 00:13:37.373 "zone_append": false, 00:13:37.373 "compare": false, 00:13:37.373 "compare_and_write": false, 00:13:37.373 "abort": true, 00:13:37.373 "seek_hole": false, 00:13:37.373 "seek_data": false, 00:13:37.373 "copy": true, 00:13:37.373 "nvme_iov_md": false 00:13:37.373 }, 00:13:37.373 "memory_domains": [ 00:13:37.373 { 00:13:37.373 "dma_device_id": "system", 00:13:37.373 "dma_device_type": 1 00:13:37.373 }, 00:13:37.373 { 00:13:37.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.373 "dma_device_type": 2 00:13:37.373 } 00:13:37.373 ], 00:13:37.373 "driver_specific": {} 00:13:37.373 } 00:13:37.373 ] 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:13:37.373 08:45:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.373 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.631 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.631 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.631 "name": "Existed_Raid", 00:13:37.632 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:37.632 "strip_size_kb": 64, 00:13:37.632 
"state": "online", 00:13:37.632 "raid_level": "raid0", 00:13:37.632 "superblock": true, 00:13:37.632 "num_base_bdevs": 4, 00:13:37.632 "num_base_bdevs_discovered": 4, 00:13:37.632 "num_base_bdevs_operational": 4, 00:13:37.632 "base_bdevs_list": [ 00:13:37.632 { 00:13:37.632 "name": "NewBaseBdev", 00:13:37.632 "uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:37.632 "is_configured": true, 00:13:37.632 "data_offset": 2048, 00:13:37.632 "data_size": 63488 00:13:37.632 }, 00:13:37.632 { 00:13:37.632 "name": "BaseBdev2", 00:13:37.632 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:37.632 "is_configured": true, 00:13:37.632 "data_offset": 2048, 00:13:37.632 "data_size": 63488 00:13:37.632 }, 00:13:37.632 { 00:13:37.632 "name": "BaseBdev3", 00:13:37.632 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:37.632 "is_configured": true, 00:13:37.632 "data_offset": 2048, 00:13:37.632 "data_size": 63488 00:13:37.632 }, 00:13:37.632 { 00:13:37.632 "name": "BaseBdev4", 00:13:37.632 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:37.632 "is_configured": true, 00:13:37.632 "data_offset": 2048, 00:13:37.632 "data_size": 63488 00:13:37.632 } 00:13:37.632 ] 00:13:37.632 }' 00:13:37.632 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.632 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.890 
08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.890 [2024-11-27 08:45:34.622597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.890 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.151 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:38.151 "name": "Existed_Raid", 00:13:38.151 "aliases": [ 00:13:38.151 "4081b123-26a2-4cb0-b757-94e2fa08af47" 00:13:38.151 ], 00:13:38.151 "product_name": "Raid Volume", 00:13:38.151 "block_size": 512, 00:13:38.151 "num_blocks": 253952, 00:13:38.151 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:38.152 "assigned_rate_limits": { 00:13:38.152 "rw_ios_per_sec": 0, 00:13:38.152 "rw_mbytes_per_sec": 0, 00:13:38.152 "r_mbytes_per_sec": 0, 00:13:38.152 "w_mbytes_per_sec": 0 00:13:38.152 }, 00:13:38.152 "claimed": false, 00:13:38.152 "zoned": false, 00:13:38.152 "supported_io_types": { 00:13:38.152 "read": true, 00:13:38.152 "write": true, 00:13:38.152 "unmap": true, 00:13:38.152 "flush": true, 00:13:38.152 "reset": true, 00:13:38.152 "nvme_admin": false, 00:13:38.152 "nvme_io": false, 00:13:38.152 "nvme_io_md": false, 00:13:38.152 "write_zeroes": true, 00:13:38.152 "zcopy": false, 00:13:38.152 "get_zone_info": false, 00:13:38.152 "zone_management": false, 00:13:38.152 "zone_append": false, 00:13:38.152 "compare": false, 00:13:38.152 "compare_and_write": false, 00:13:38.152 "abort": 
false, 00:13:38.152 "seek_hole": false, 00:13:38.152 "seek_data": false, 00:13:38.152 "copy": false, 00:13:38.152 "nvme_iov_md": false 00:13:38.152 }, 00:13:38.152 "memory_domains": [ 00:13:38.152 { 00:13:38.152 "dma_device_id": "system", 00:13:38.152 "dma_device_type": 1 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.152 "dma_device_type": 2 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "dma_device_id": "system", 00:13:38.152 "dma_device_type": 1 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.152 "dma_device_type": 2 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "dma_device_id": "system", 00:13:38.152 "dma_device_type": 1 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.152 "dma_device_type": 2 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "dma_device_id": "system", 00:13:38.152 "dma_device_type": 1 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.152 "dma_device_type": 2 00:13:38.152 } 00:13:38.152 ], 00:13:38.152 "driver_specific": { 00:13:38.152 "raid": { 00:13:38.152 "uuid": "4081b123-26a2-4cb0-b757-94e2fa08af47", 00:13:38.152 "strip_size_kb": 64, 00:13:38.152 "state": "online", 00:13:38.152 "raid_level": "raid0", 00:13:38.152 "superblock": true, 00:13:38.152 "num_base_bdevs": 4, 00:13:38.152 "num_base_bdevs_discovered": 4, 00:13:38.152 "num_base_bdevs_operational": 4, 00:13:38.152 "base_bdevs_list": [ 00:13:38.152 { 00:13:38.152 "name": "NewBaseBdev", 00:13:38.152 "uuid": "baf4039b-26ed-428e-b26d-965508263339", 00:13:38.152 "is_configured": true, 00:13:38.152 "data_offset": 2048, 00:13:38.152 "data_size": 63488 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "name": "BaseBdev2", 00:13:38.152 "uuid": "dfb6b71a-34da-48ce-982c-4b02f11bcd3b", 00:13:38.152 "is_configured": true, 00:13:38.152 "data_offset": 2048, 00:13:38.152 "data_size": 63488 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 
"name": "BaseBdev3", 00:13:38.152 "uuid": "94e50c91-0251-4cd0-9b73-4b5f61adb1e0", 00:13:38.152 "is_configured": true, 00:13:38.152 "data_offset": 2048, 00:13:38.152 "data_size": 63488 00:13:38.152 }, 00:13:38.152 { 00:13:38.152 "name": "BaseBdev4", 00:13:38.152 "uuid": "abcc984a-801c-4b08-926d-54b163b8dba8", 00:13:38.152 "is_configured": true, 00:13:38.152 "data_offset": 2048, 00:13:38.152 "data_size": 63488 00:13:38.152 } 00:13:38.152 ] 00:13:38.152 } 00:13:38.152 } 00:13:38.152 }' 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:38.152 BaseBdev2 00:13:38.152 BaseBdev3 00:13:38.152 BaseBdev4' 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.152 08:45:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.152 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.411 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.412 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:38.412 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.412 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.412 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.412 [2024-11-27 08:45:35.018166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:38.412 [2024-11-27 08:45:35.018364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.412 [2024-11-27 08:45:35.018507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.412 [2024-11-27 08:45:35.018614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.412 [2024-11-27 08:45:35.018633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70295 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 70295 ']' 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 70295 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 70295 00:13:38.412 killing process with pid 70295 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 70295' 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 70295 00:13:38.412 [2024-11-27 08:45:35.059226] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.412 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 70295 00:13:38.980 [2024-11-27 08:45:35.435866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.917 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:39.917 00:13:39.917 real 0m13.195s 00:13:39.917 user 0m21.749s 00:13:39.917 sys 0m1.897s 00:13:39.917 ************************************ 00:13:39.917 END TEST raid_state_function_test_sb 00:13:39.917 
************************************ 00:13:39.917 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:39.917 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.917 08:45:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:39.917 08:45:36 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:13:39.917 08:45:36 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:39.917 08:45:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.917 ************************************ 00:13:39.917 START TEST raid_superblock_test 00:13:39.918 ************************************ 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test raid0 4 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70980 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70980 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 70980 ']' 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:39.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:39.918 08:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.180 [2024-11-27 08:45:36.718079] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:13:40.180 [2024-11-27 08:45:36.718296] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70980 ] 00:13:40.180 [2024-11-27 08:45:36.908469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.439 [2024-11-27 08:45:37.054887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.698 [2024-11-27 08:45:37.278126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.698 [2024-11-27 08:45:37.278520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:41.265 
08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.265 malloc1 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.265 [2024-11-27 08:45:37.813174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:41.265 [2024-11-27 08:45:37.813262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.265 [2024-11-27 08:45:37.813298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:41.265 [2024-11-27 08:45:37.813315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.265 [2024-11-27 08:45:37.816366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.265 [2024-11-27 08:45:37.816421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:41.265 pt1 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.265 malloc2 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.265 [2024-11-27 08:45:37.873005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.265 [2024-11-27 08:45:37.873088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.265 [2024-11-27 08:45:37.873122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:41.265 [2024-11-27 08:45:37.873138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.265 [2024-11-27 08:45:37.876118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.265 [2024-11-27 08:45:37.876170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.265 
pt2 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.265 malloc3 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.265 [2024-11-27 08:45:37.949547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:41.265 [2024-11-27 08:45:37.949757] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.265 [2024-11-27 08:45:37.949838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:41.265 [2024-11-27 08:45:37.950026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.265 [2024-11-27 08:45:37.953000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.265 [2024-11-27 08:45:37.953161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:41.265 pt3 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.265 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.265 malloc4 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.265 [2024-11-27 08:45:38.009242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:41.265 [2024-11-27 08:45:38.009318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.265 [2024-11-27 08:45:38.009375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:41.265 [2024-11-27 08:45:38.009394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.265 [2024-11-27 08:45:38.012380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.265 [2024-11-27 08:45:38.012426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:41.265 pt4 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:41.265 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.266 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.524 [2024-11-27 08:45:38.021370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:41.524 [2024-11-27 
08:45:38.023971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.524 [2024-11-27 08:45:38.024217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:41.524 [2024-11-27 08:45:38.024331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:41.524 [2024-11-27 08:45:38.024626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:41.524 [2024-11-27 08:45:38.024646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:41.524 [2024-11-27 08:45:38.024998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:41.524 [2024-11-27 08:45:38.025248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:41.524 [2024-11-27 08:45:38.025271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:41.524 [2024-11-27 08:45:38.025537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.524 "name": "raid_bdev1", 00:13:41.524 "uuid": "83b7d24e-855a-4a2b-8be4-b25b33ea60f6", 00:13:41.524 "strip_size_kb": 64, 00:13:41.524 "state": "online", 00:13:41.524 "raid_level": "raid0", 00:13:41.524 "superblock": true, 00:13:41.524 "num_base_bdevs": 4, 00:13:41.524 "num_base_bdevs_discovered": 4, 00:13:41.524 "num_base_bdevs_operational": 4, 00:13:41.524 "base_bdevs_list": [ 00:13:41.524 { 00:13:41.524 "name": "pt1", 00:13:41.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.524 "is_configured": true, 00:13:41.524 "data_offset": 2048, 00:13:41.524 "data_size": 63488 00:13:41.524 }, 00:13:41.524 { 00:13:41.524 "name": "pt2", 00:13:41.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.524 "is_configured": true, 00:13:41.524 "data_offset": 2048, 00:13:41.524 "data_size": 63488 00:13:41.524 }, 00:13:41.524 { 00:13:41.524 "name": "pt3", 00:13:41.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.524 "is_configured": true, 00:13:41.524 "data_offset": 2048, 00:13:41.524 
"data_size": 63488 00:13:41.524 }, 00:13:41.524 { 00:13:41.524 "name": "pt4", 00:13:41.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.524 "is_configured": true, 00:13:41.524 "data_offset": 2048, 00:13:41.524 "data_size": 63488 00:13:41.524 } 00:13:41.524 ] 00:13:41.524 }' 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.524 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.090 [2024-11-27 08:45:38.558107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.090 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.090 "name": "raid_bdev1", 00:13:42.090 "aliases": [ 00:13:42.090 "83b7d24e-855a-4a2b-8be4-b25b33ea60f6" 
00:13:42.090 ], 00:13:42.090 "product_name": "Raid Volume", 00:13:42.090 "block_size": 512, 00:13:42.090 "num_blocks": 253952, 00:13:42.090 "uuid": "83b7d24e-855a-4a2b-8be4-b25b33ea60f6", 00:13:42.090 "assigned_rate_limits": { 00:13:42.090 "rw_ios_per_sec": 0, 00:13:42.090 "rw_mbytes_per_sec": 0, 00:13:42.090 "r_mbytes_per_sec": 0, 00:13:42.090 "w_mbytes_per_sec": 0 00:13:42.090 }, 00:13:42.090 "claimed": false, 00:13:42.090 "zoned": false, 00:13:42.090 "supported_io_types": { 00:13:42.090 "read": true, 00:13:42.090 "write": true, 00:13:42.090 "unmap": true, 00:13:42.090 "flush": true, 00:13:42.090 "reset": true, 00:13:42.090 "nvme_admin": false, 00:13:42.090 "nvme_io": false, 00:13:42.090 "nvme_io_md": false, 00:13:42.090 "write_zeroes": true, 00:13:42.090 "zcopy": false, 00:13:42.090 "get_zone_info": false, 00:13:42.090 "zone_management": false, 00:13:42.090 "zone_append": false, 00:13:42.090 "compare": false, 00:13:42.090 "compare_and_write": false, 00:13:42.090 "abort": false, 00:13:42.090 "seek_hole": false, 00:13:42.090 "seek_data": false, 00:13:42.090 "copy": false, 00:13:42.090 "nvme_iov_md": false 00:13:42.090 }, 00:13:42.090 "memory_domains": [ 00:13:42.090 { 00:13:42.090 "dma_device_id": "system", 00:13:42.090 "dma_device_type": 1 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.090 "dma_device_type": 2 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "dma_device_id": "system", 00:13:42.090 "dma_device_type": 1 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.090 "dma_device_type": 2 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "dma_device_id": "system", 00:13:42.090 "dma_device_type": 1 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.090 "dma_device_type": 2 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "dma_device_id": "system", 00:13:42.090 "dma_device_type": 1 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:42.090 "dma_device_type": 2 00:13:42.090 } 00:13:42.090 ], 00:13:42.090 "driver_specific": { 00:13:42.090 "raid": { 00:13:42.090 "uuid": "83b7d24e-855a-4a2b-8be4-b25b33ea60f6", 00:13:42.090 "strip_size_kb": 64, 00:13:42.090 "state": "online", 00:13:42.090 "raid_level": "raid0", 00:13:42.090 "superblock": true, 00:13:42.090 "num_base_bdevs": 4, 00:13:42.090 "num_base_bdevs_discovered": 4, 00:13:42.090 "num_base_bdevs_operational": 4, 00:13:42.090 "base_bdevs_list": [ 00:13:42.090 { 00:13:42.090 "name": "pt1", 00:13:42.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.090 "is_configured": true, 00:13:42.090 "data_offset": 2048, 00:13:42.090 "data_size": 63488 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "name": "pt2", 00:13:42.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.090 "is_configured": true, 00:13:42.090 "data_offset": 2048, 00:13:42.090 "data_size": 63488 00:13:42.090 }, 00:13:42.090 { 00:13:42.090 "name": "pt3", 00:13:42.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.091 "is_configured": true, 00:13:42.091 "data_offset": 2048, 00:13:42.091 "data_size": 63488 00:13:42.091 }, 00:13:42.091 { 00:13:42.091 "name": "pt4", 00:13:42.091 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.091 "is_configured": true, 00:13:42.091 "data_offset": 2048, 00:13:42.091 "data_size": 63488 00:13:42.091 } 00:13:42.091 ] 00:13:42.091 } 00:13:42.091 } 00:13:42.091 }' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:42.091 pt2 00:13:42.091 pt3 00:13:42.091 pt4' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.091 08:45:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.091 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.349 [2024-11-27 08:45:38.934141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=83b7d24e-855a-4a2b-8be4-b25b33ea60f6 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 83b7d24e-855a-4a2b-8be4-b25b33ea60f6 ']' 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.349 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.349 [2024-11-27 08:45:38.997794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.349 [2024-11-27 08:45:38.997956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.349 [2024-11-27 08:45:38.998112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.349 [2024-11-27 08:45:38.998230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.349 [2024-11-27 08:45:38.998266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.349 08:45:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.350 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.607 08:45:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.607 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.607 [2024-11-27 08:45:39.153833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:42.607 [2024-11-27 08:45:39.156781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:42.607 [2024-11-27 08:45:39.156968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:42.607 [2024-11-27 08:45:39.157147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:42.607 [2024-11-27 08:45:39.157375] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:42.607 [2024-11-27 08:45:39.157608] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:42.607 [2024-11-27 08:45:39.157833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:42.607 [2024-11-27 08:45:39.158007] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:42.607 [2024-11-27 08:45:39.158257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.607 [2024-11-27 08:45:39.158427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:42.607 request: 00:13:42.607 { 00:13:42.607 "name": "raid_bdev1", 00:13:42.607 "raid_level": "raid0", 00:13:42.607 "base_bdevs": [ 00:13:42.607 "malloc1", 00:13:42.607 "malloc2", 00:13:42.607 "malloc3", 00:13:42.607 "malloc4" 00:13:42.607 ], 00:13:42.607 "strip_size_kb": 64, 00:13:42.607 "superblock": false, 00:13:42.608 "method": "bdev_raid_create", 00:13:42.608 "req_id": 1 00:13:42.608 } 00:13:42.608 Got JSON-RPC error response 00:13:42.608 response: 00:13:42.608 { 00:13:42.608 "code": -17, 00:13:42.608 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:42.608 } 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.608 [2024-11-27 08:45:39.226872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:42.608 [2024-11-27 08:45:39.226966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.608 [2024-11-27 08:45:39.226998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:42.608 [2024-11-27 08:45:39.227018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.608 [2024-11-27 08:45:39.230156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.608 [2024-11-27 08:45:39.230225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:42.608 [2024-11-27 08:45:39.230367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:42.608 [2024-11-27 08:45:39.230461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.608 pt1 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.608 "name": "raid_bdev1", 00:13:42.608 "uuid": "83b7d24e-855a-4a2b-8be4-b25b33ea60f6", 00:13:42.608 "strip_size_kb": 64, 00:13:42.608 "state": "configuring", 00:13:42.608 "raid_level": "raid0", 00:13:42.608 "superblock": true, 00:13:42.608 "num_base_bdevs": 4, 00:13:42.608 "num_base_bdevs_discovered": 1, 00:13:42.608 "num_base_bdevs_operational": 4, 00:13:42.608 "base_bdevs_list": [ 00:13:42.608 { 00:13:42.608 "name": "pt1", 00:13:42.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.608 "is_configured": true, 00:13:42.608 "data_offset": 2048, 00:13:42.608 "data_size": 63488 00:13:42.608 }, 00:13:42.608 { 00:13:42.608 "name": null, 00:13:42.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.608 "is_configured": false, 00:13:42.608 "data_offset": 2048, 00:13:42.608 "data_size": 63488 00:13:42.608 }, 00:13:42.608 { 00:13:42.608 "name": null, 00:13:42.608 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.608 "is_configured": false, 00:13:42.608 "data_offset": 2048, 00:13:42.608 "data_size": 63488 00:13:42.608 }, 00:13:42.608 { 00:13:42.608 "name": null, 00:13:42.608 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.608 "is_configured": false, 00:13:42.608 "data_offset": 2048, 00:13:42.608 "data_size": 63488 00:13:42.608 } 00:13:42.608 ] 00:13:42.608 }' 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.608 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.174 [2024-11-27 08:45:39.751056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:43.174 [2024-11-27 08:45:39.751356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.174 [2024-11-27 08:45:39.751437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:43.174 [2024-11-27 08:45:39.751632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.174 [2024-11-27 08:45:39.752361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.174 [2024-11-27 08:45:39.752529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:43.174 [2024-11-27 08:45:39.752671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:43.174 [2024-11-27 08:45:39.752715] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:43.174 pt2 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.174 [2024-11-27 08:45:39.759052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.174 08:45:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.174 "name": "raid_bdev1", 00:13:43.174 "uuid": "83b7d24e-855a-4a2b-8be4-b25b33ea60f6", 00:13:43.174 "strip_size_kb": 64, 00:13:43.174 "state": "configuring", 00:13:43.174 "raid_level": "raid0", 00:13:43.174 "superblock": true, 00:13:43.174 "num_base_bdevs": 4, 00:13:43.174 "num_base_bdevs_discovered": 1, 00:13:43.174 "num_base_bdevs_operational": 4, 00:13:43.174 "base_bdevs_list": [ 00:13:43.174 { 00:13:43.174 "name": "pt1", 00:13:43.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.174 "is_configured": true, 00:13:43.174 "data_offset": 2048, 00:13:43.174 "data_size": 63488 00:13:43.174 }, 00:13:43.174 { 00:13:43.174 "name": null, 00:13:43.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.174 "is_configured": false, 00:13:43.174 "data_offset": 0, 00:13:43.174 "data_size": 63488 00:13:43.174 }, 00:13:43.174 { 00:13:43.174 "name": null, 00:13:43.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.174 "is_configured": false, 00:13:43.174 "data_offset": 2048, 00:13:43.174 "data_size": 63488 00:13:43.174 }, 00:13:43.174 { 00:13:43.174 "name": null, 00:13:43.174 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.174 "is_configured": false, 00:13:43.174 "data_offset": 2048, 00:13:43.174 "data_size": 63488 00:13:43.174 } 00:13:43.174 ] 00:13:43.174 }' 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.174 08:45:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.775 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:43.775 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:43.775 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:43.775 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.775 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.775 [2024-11-27 08:45:40.307227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:43.775 [2024-11-27 08:45:40.307324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.776 [2024-11-27 08:45:40.307375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:43.776 [2024-11-27 08:45:40.307393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.776 [2024-11-27 08:45:40.308042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.776 [2024-11-27 08:45:40.308075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:43.776 [2024-11-27 08:45:40.308199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:43.776 [2024-11-27 08:45:40.308241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:43.776 pt2 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.776 [2024-11-27 08:45:40.315155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:43.776 [2024-11-27 08:45:40.315216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.776 [2024-11-27 08:45:40.315253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:43.776 [2024-11-27 08:45:40.315271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.776 [2024-11-27 08:45:40.315761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.776 [2024-11-27 08:45:40.315805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:43.776 [2024-11-27 08:45:40.315889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:43.776 [2024-11-27 08:45:40.315918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:43.776 pt3 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.776 [2024-11-27 08:45:40.323129] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:43.776 [2024-11-27 08:45:40.323371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.776 [2024-11-27 08:45:40.323417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:43.776 [2024-11-27 08:45:40.323432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.776 [2024-11-27 08:45:40.323900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.776 [2024-11-27 08:45:40.323936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:43.776 [2024-11-27 08:45:40.324022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:43.776 [2024-11-27 08:45:40.324051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:43.776 [2024-11-27 08:45:40.324223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:43.776 [2024-11-27 08:45:40.324240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:43.776 [2024-11-27 08:45:40.324588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:43.776 [2024-11-27 08:45:40.324792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:43.776 [2024-11-27 08:45:40.324815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:43.776 [2024-11-27 08:45:40.324988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.776 pt4 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.776 "name": "raid_bdev1", 00:13:43.776 "uuid": "83b7d24e-855a-4a2b-8be4-b25b33ea60f6", 00:13:43.776 "strip_size_kb": 64, 00:13:43.776 "state": "online", 00:13:43.776 "raid_level": "raid0", 00:13:43.776 
"superblock": true, 00:13:43.776 "num_base_bdevs": 4, 00:13:43.776 "num_base_bdevs_discovered": 4, 00:13:43.776 "num_base_bdevs_operational": 4, 00:13:43.776 "base_bdevs_list": [ 00:13:43.776 { 00:13:43.776 "name": "pt1", 00:13:43.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.776 "is_configured": true, 00:13:43.776 "data_offset": 2048, 00:13:43.776 "data_size": 63488 00:13:43.776 }, 00:13:43.776 { 00:13:43.776 "name": "pt2", 00:13:43.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.776 "is_configured": true, 00:13:43.776 "data_offset": 2048, 00:13:43.776 "data_size": 63488 00:13:43.776 }, 00:13:43.776 { 00:13:43.776 "name": "pt3", 00:13:43.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.776 "is_configured": true, 00:13:43.776 "data_offset": 2048, 00:13:43.776 "data_size": 63488 00:13:43.776 }, 00:13:43.776 { 00:13:43.776 "name": "pt4", 00:13:43.776 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.776 "is_configured": true, 00:13:43.776 "data_offset": 2048, 00:13:43.776 "data_size": 63488 00:13:43.776 } 00:13:43.776 ] 00:13:43.776 }' 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.776 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:44.366 08:45:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.366 [2024-11-27 08:45:40.859814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.366 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:44.366 "name": "raid_bdev1", 00:13:44.366 "aliases": [ 00:13:44.366 "83b7d24e-855a-4a2b-8be4-b25b33ea60f6" 00:13:44.366 ], 00:13:44.366 "product_name": "Raid Volume", 00:13:44.366 "block_size": 512, 00:13:44.366 "num_blocks": 253952, 00:13:44.366 "uuid": "83b7d24e-855a-4a2b-8be4-b25b33ea60f6", 00:13:44.366 "assigned_rate_limits": { 00:13:44.366 "rw_ios_per_sec": 0, 00:13:44.366 "rw_mbytes_per_sec": 0, 00:13:44.366 "r_mbytes_per_sec": 0, 00:13:44.366 "w_mbytes_per_sec": 0 00:13:44.366 }, 00:13:44.366 "claimed": false, 00:13:44.366 "zoned": false, 00:13:44.366 "supported_io_types": { 00:13:44.366 "read": true, 00:13:44.366 "write": true, 00:13:44.366 "unmap": true, 00:13:44.366 "flush": true, 00:13:44.366 "reset": true, 00:13:44.366 "nvme_admin": false, 00:13:44.366 "nvme_io": false, 00:13:44.366 "nvme_io_md": false, 00:13:44.366 "write_zeroes": true, 00:13:44.367 "zcopy": false, 00:13:44.367 "get_zone_info": false, 00:13:44.367 "zone_management": false, 00:13:44.367 "zone_append": false, 00:13:44.367 "compare": false, 00:13:44.367 "compare_and_write": false, 00:13:44.367 "abort": false, 00:13:44.367 "seek_hole": false, 00:13:44.367 "seek_data": false, 00:13:44.367 "copy": false, 00:13:44.367 "nvme_iov_md": false 00:13:44.367 }, 00:13:44.367 
"memory_domains": [ 00:13:44.367 { 00:13:44.367 "dma_device_id": "system", 00:13:44.367 "dma_device_type": 1 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.367 "dma_device_type": 2 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "dma_device_id": "system", 00:13:44.367 "dma_device_type": 1 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.367 "dma_device_type": 2 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "dma_device_id": "system", 00:13:44.367 "dma_device_type": 1 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.367 "dma_device_type": 2 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "dma_device_id": "system", 00:13:44.367 "dma_device_type": 1 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.367 "dma_device_type": 2 00:13:44.367 } 00:13:44.367 ], 00:13:44.367 "driver_specific": { 00:13:44.367 "raid": { 00:13:44.367 "uuid": "83b7d24e-855a-4a2b-8be4-b25b33ea60f6", 00:13:44.367 "strip_size_kb": 64, 00:13:44.367 "state": "online", 00:13:44.367 "raid_level": "raid0", 00:13:44.367 "superblock": true, 00:13:44.367 "num_base_bdevs": 4, 00:13:44.367 "num_base_bdevs_discovered": 4, 00:13:44.367 "num_base_bdevs_operational": 4, 00:13:44.367 "base_bdevs_list": [ 00:13:44.367 { 00:13:44.367 "name": "pt1", 00:13:44.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.367 "is_configured": true, 00:13:44.367 "data_offset": 2048, 00:13:44.367 "data_size": 63488 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "name": "pt2", 00:13:44.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.367 "is_configured": true, 00:13:44.367 "data_offset": 2048, 00:13:44.367 "data_size": 63488 00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "name": "pt3", 00:13:44.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.367 "is_configured": true, 00:13:44.367 "data_offset": 2048, 00:13:44.367 "data_size": 63488 
00:13:44.367 }, 00:13:44.367 { 00:13:44.367 "name": "pt4", 00:13:44.367 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.367 "is_configured": true, 00:13:44.367 "data_offset": 2048, 00:13:44.367 "data_size": 63488 00:13:44.367 } 00:13:44.367 ] 00:13:44.367 } 00:13:44.367 } 00:13:44.367 }' 00:13:44.367 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:44.367 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:44.367 pt2 00:13:44.367 pt3 00:13:44.367 pt4' 00:13:44.367 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.367 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:44.627 [2024-11-27 08:45:41.223809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 83b7d24e-855a-4a2b-8be4-b25b33ea60f6 '!=' 83b7d24e-855a-4a2b-8be4-b25b33ea60f6 ']' 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:44.627 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70980 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 70980 ']' 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 70980 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # uname 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 70980 00:13:44.628 killing process with pid 70980 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 70980' 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # kill 70980 00:13:44.628 [2024-11-27 08:45:41.317572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.628 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@975 -- # wait 70980 00:13:44.628 [2024-11-27 08:45:41.317696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.628 [2024-11-27 08:45:41.317803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.628 [2024-11-27 08:45:41.317820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:45.195 [2024-11-27 08:45:41.692142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.133 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:46.133 00:13:46.133 real 0m6.220s 00:13:46.133 user 0m9.275s 00:13:46.133 sys 0m0.957s 00:13:46.133 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:46.133 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.133 ************************************ 00:13:46.133 END TEST raid_superblock_test 
00:13:46.133 ************************************ 00:13:46.133 08:45:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:46.133 08:45:42 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:13:46.133 08:45:42 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:46.133 08:45:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.133 ************************************ 00:13:46.133 START TEST raid_read_error_test 00:13:46.133 ************************************ 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid0 4 read 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:46.133 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xDgPz397lu 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71250 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71250 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 71250 ']' 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:46.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:46.393 08:45:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.393 [2024-11-27 08:45:42.992858] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:13:46.393 [2024-11-27 08:45:42.993352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71250 ] 00:13:46.653 [2024-11-27 08:45:43.171397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.653 [2024-11-27 08:45:43.323578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.911 [2024-11-27 08:45:43.549697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.911 [2024-11-27 08:45:43.549748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.478 BaseBdev1_malloc 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.478 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.478 true 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 [2024-11-27 08:45:44.082429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:47.479 [2024-11-27 08:45:44.082510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.479 [2024-11-27 08:45:44.082540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:47.479 [2024-11-27 08:45:44.082559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.479 [2024-11-27 08:45:44.085575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.479 [2024-11-27 08:45:44.085629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.479 BaseBdev1 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 BaseBdev2_malloc 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 true 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 [2024-11-27 08:45:44.147462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:47.479 [2024-11-27 08:45:44.147554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.479 [2024-11-27 08:45:44.147579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:47.479 [2024-11-27 08:45:44.147596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.479 [2024-11-27 08:45:44.150712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.479 [2024-11-27 08:45:44.150968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.479 BaseBdev2 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 BaseBdev3_malloc 00:13:47.479 08:45:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 true 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 [2024-11-27 08:45:44.222583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:47.479 [2024-11-27 08:45:44.222818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.479 [2024-11-27 08:45:44.222858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:47.479 [2024-11-27 08:45:44.222879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.479 [2024-11-27 08:45:44.225949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.479 [2024-11-27 08:45:44.226143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.479 BaseBdev3 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.479 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.738 BaseBdev4_malloc 00:13:47.738 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.738 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:47.738 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.738 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.738 true 00:13:47.738 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.738 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:47.738 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.739 [2024-11-27 08:45:44.287182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:47.739 [2024-11-27 08:45:44.287266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.739 [2024-11-27 08:45:44.287298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:47.739 [2024-11-27 08:45:44.287317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.739 [2024-11-27 08:45:44.290392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.739 [2024-11-27 08:45:44.290459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:47.739 BaseBdev4 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.739 [2024-11-27 08:45:44.295321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.739 [2024-11-27 08:45:44.298149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.739 [2024-11-27 08:45:44.298447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.739 [2024-11-27 08:45:44.298736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:47.739 [2024-11-27 08:45:44.299165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:47.739 [2024-11-27 08:45:44.299318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:47.739 [2024-11-27 08:45:44.299711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:47.739 [2024-11-27 08:45:44.300059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:47.739 [2024-11-27 08:45:44.300194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:47.739 [2024-11-27 08:45:44.300593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:47.739 08:45:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.739 "name": "raid_bdev1", 00:13:47.739 "uuid": "bf3cc69b-db9e-4cdd-b077-9f055e6af3d9", 00:13:47.739 "strip_size_kb": 64, 00:13:47.739 "state": "online", 00:13:47.739 "raid_level": "raid0", 00:13:47.739 "superblock": true, 00:13:47.739 "num_base_bdevs": 4, 00:13:47.739 "num_base_bdevs_discovered": 4, 00:13:47.739 "num_base_bdevs_operational": 4, 00:13:47.739 "base_bdevs_list": [ 00:13:47.739 
{ 00:13:47.739 "name": "BaseBdev1", 00:13:47.739 "uuid": "1b2252dc-0302-501c-97f5-1fd5c774e53f", 00:13:47.739 "is_configured": true, 00:13:47.739 "data_offset": 2048, 00:13:47.739 "data_size": 63488 00:13:47.739 }, 00:13:47.739 { 00:13:47.739 "name": "BaseBdev2", 00:13:47.739 "uuid": "d081f57c-a213-5ba2-9eea-dfd8598082c3", 00:13:47.739 "is_configured": true, 00:13:47.739 "data_offset": 2048, 00:13:47.739 "data_size": 63488 00:13:47.739 }, 00:13:47.739 { 00:13:47.739 "name": "BaseBdev3", 00:13:47.739 "uuid": "d56b5783-11ff-5830-aabb-52755096d330", 00:13:47.739 "is_configured": true, 00:13:47.739 "data_offset": 2048, 00:13:47.739 "data_size": 63488 00:13:47.739 }, 00:13:47.739 { 00:13:47.739 "name": "BaseBdev4", 00:13:47.739 "uuid": "9fe0ee5d-7c5c-59d3-a6fc-37cb58f0b969", 00:13:47.739 "is_configured": true, 00:13:47.739 "data_offset": 2048, 00:13:47.739 "data_size": 63488 00:13:47.739 } 00:13:47.739 ] 00:13:47.739 }' 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.739 08:45:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.334 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:48.334 08:45:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:48.334 [2024-11-27 08:45:44.941126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.295 08:45:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.295 08:45:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.295 "name": "raid_bdev1", 00:13:49.295 "uuid": "bf3cc69b-db9e-4cdd-b077-9f055e6af3d9", 00:13:49.295 "strip_size_kb": 64, 00:13:49.295 "state": "online", 00:13:49.295 "raid_level": "raid0", 00:13:49.295 "superblock": true, 00:13:49.295 "num_base_bdevs": 4, 00:13:49.295 "num_base_bdevs_discovered": 4, 00:13:49.295 "num_base_bdevs_operational": 4, 00:13:49.295 "base_bdevs_list": [ 00:13:49.295 { 00:13:49.295 "name": "BaseBdev1", 00:13:49.295 "uuid": "1b2252dc-0302-501c-97f5-1fd5c774e53f", 00:13:49.295 "is_configured": true, 00:13:49.295 "data_offset": 2048, 00:13:49.295 "data_size": 63488 00:13:49.295 }, 00:13:49.295 { 00:13:49.295 "name": "BaseBdev2", 00:13:49.295 "uuid": "d081f57c-a213-5ba2-9eea-dfd8598082c3", 00:13:49.295 "is_configured": true, 00:13:49.295 "data_offset": 2048, 00:13:49.295 "data_size": 63488 00:13:49.295 }, 00:13:49.295 { 00:13:49.295 "name": "BaseBdev3", 00:13:49.295 "uuid": "d56b5783-11ff-5830-aabb-52755096d330", 00:13:49.295 "is_configured": true, 00:13:49.295 "data_offset": 2048, 00:13:49.295 "data_size": 63488 00:13:49.295 }, 00:13:49.295 { 00:13:49.295 "name": "BaseBdev4", 00:13:49.295 "uuid": "9fe0ee5d-7c5c-59d3-a6fc-37cb58f0b969", 00:13:49.295 "is_configured": true, 00:13:49.295 "data_offset": 2048, 00:13:49.295 "data_size": 63488 00:13:49.295 } 00:13:49.295 ] 00:13:49.295 }' 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.295 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.861 [2024-11-27 08:45:46.396120] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.861 [2024-11-27 08:45:46.396352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.861 [2024-11-27 08:45:46.399965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.861 [2024-11-27 08:45:46.400178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.861 [2024-11-27 08:45:46.400394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.861 [2024-11-27 08:45:46.400544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:49.861 { 00:13:49.861 "results": [ 00:13:49.861 { 00:13:49.861 "job": "raid_bdev1", 00:13:49.861 "core_mask": "0x1", 00:13:49.861 "workload": "randrw", 00:13:49.861 "percentage": 50, 00:13:49.861 "status": "finished", 00:13:49.861 "queue_depth": 1, 00:13:49.861 "io_size": 131072, 00:13:49.861 "runtime": 1.452501, 00:13:49.861 "iops": 9858.16877234508, 00:13:49.861 "mibps": 1232.271096543135, 00:13:49.861 "io_failed": 1, 00:13:49.861 "io_timeout": 0, 00:13:49.861 "avg_latency_us": 142.8483250380904, 00:13:49.861 "min_latency_us": 40.49454545454545, 00:13:49.861 "max_latency_us": 1936.290909090909 00:13:49.861 } 00:13:49.861 ], 00:13:49.861 "core_count": 1 00:13:49.861 } 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71250 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 71250 ']' 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 71250 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # uname 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 71250 00:13:49.861 killing process with pid 71250 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 71250' 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 71250 00:13:49.861 [2024-11-27 08:45:46.441743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.861 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 71250 00:13:50.118 [2024-11-27 08:45:46.754183] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xDgPz397lu 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:13:51.506 00:13:51.506 real 0m5.061s 00:13:51.506 user 0m6.166s 00:13:51.506 sys 0m0.688s 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # 
xtrace_disable 00:13:51.506 ************************************ 00:13:51.506 END TEST raid_read_error_test 00:13:51.506 ************************************ 00:13:51.506 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.506 08:45:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:51.506 08:45:47 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:13:51.506 08:45:47 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:51.506 08:45:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.506 ************************************ 00:13:51.506 START TEST raid_write_error_test 00:13:51.506 ************************************ 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid0 4 write 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:51.506 08:45:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lCrzeNKJvv 00:13:51.506 08:45:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71400 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71400 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 71400 ']' 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:51.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:51.506 08:45:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.506 [2024-11-27 08:45:48.116844] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:13:51.506 [2024-11-27 08:45:48.117042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71400 ] 00:13:51.764 [2024-11-27 08:45:48.305373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.764 [2024-11-27 08:45:48.452713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.021 [2024-11-27 08:45:48.677752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.021 [2024-11-27 08:45:48.677817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.589 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:52.589 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:13:52.589 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:52.589 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.590 BaseBdev1_malloc 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.590 true 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.590 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.590 [2024-11-27 08:45:49.265292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:52.590 [2024-11-27 08:45:49.265381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.590 [2024-11-27 08:45:49.265413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:52.590 [2024-11-27 08:45:49.265433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.590 [2024-11-27 08:45:49.268449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.591 [2024-11-27 08:45:49.268501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.591 BaseBdev1 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.591 BaseBdev2_malloc 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:52.591 08:45:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.591 true 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.591 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.591 [2024-11-27 08:45:49.325571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:52.591 [2024-11-27 08:45:49.325651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.591 [2024-11-27 08:45:49.325677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:52.592 [2024-11-27 08:45:49.325696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.592 [2024-11-27 08:45:49.328705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.592 [2024-11-27 08:45:49.328758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.592 BaseBdev2 00:13:52.592 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.592 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:52.592 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:52.592 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.592 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:52.854 BaseBdev3_malloc 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 true 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 [2024-11-27 08:45:49.399346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:52.854 [2024-11-27 08:45:49.399421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.854 [2024-11-27 08:45:49.399450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:52.854 [2024-11-27 08:45:49.399469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.854 [2024-11-27 08:45:49.402481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.854 [2024-11-27 08:45:49.402536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:52.854 BaseBdev3 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 BaseBdev4_malloc 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 true 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.854 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.855 [2024-11-27 08:45:49.463826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:52.855 [2024-11-27 08:45:49.463904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.855 [2024-11-27 08:45:49.463933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:52.855 [2024-11-27 08:45:49.463952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.855 [2024-11-27 08:45:49.466954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.855 [2024-11-27 08:45:49.467014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:52.855 BaseBdev4 
00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.855 [2024-11-27 08:45:49.471982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.855 [2024-11-27 08:45:49.474647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.855 [2024-11-27 08:45:49.474796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.855 [2024-11-27 08:45:49.474902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.855 [2024-11-27 08:45:49.475207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:52.855 [2024-11-27 08:45:49.475248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:52.855 [2024-11-27 08:45:49.475587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:52.855 [2024-11-27 08:45:49.475824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:52.855 [2024-11-27 08:45:49.475856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:52.855 [2024-11-27 08:45:49.476103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.855 "name": "raid_bdev1", 00:13:52.855 "uuid": "47403c38-df27-4f4a-94c1-d1807bfb24c0", 00:13:52.855 "strip_size_kb": 64, 00:13:52.855 "state": "online", 00:13:52.855 "raid_level": "raid0", 00:13:52.855 "superblock": true, 00:13:52.855 "num_base_bdevs": 4, 00:13:52.855 "num_base_bdevs_discovered": 4, 00:13:52.855 
"num_base_bdevs_operational": 4, 00:13:52.855 "base_bdevs_list": [ 00:13:52.855 { 00:13:52.855 "name": "BaseBdev1", 00:13:52.855 "uuid": "f4379b38-7e95-53c4-983a-20385fb980c0", 00:13:52.855 "is_configured": true, 00:13:52.855 "data_offset": 2048, 00:13:52.855 "data_size": 63488 00:13:52.855 }, 00:13:52.855 { 00:13:52.855 "name": "BaseBdev2", 00:13:52.855 "uuid": "d3df976f-be61-5c62-b057-1c0bff283922", 00:13:52.855 "is_configured": true, 00:13:52.855 "data_offset": 2048, 00:13:52.855 "data_size": 63488 00:13:52.855 }, 00:13:52.855 { 00:13:52.855 "name": "BaseBdev3", 00:13:52.855 "uuid": "0b2dd611-9379-5af8-9c73-ab4f89d154df", 00:13:52.855 "is_configured": true, 00:13:52.855 "data_offset": 2048, 00:13:52.855 "data_size": 63488 00:13:52.855 }, 00:13:52.855 { 00:13:52.855 "name": "BaseBdev4", 00:13:52.855 "uuid": "3e0d1d8e-4c45-51dd-ad73-dba62972bdcc", 00:13:52.855 "is_configured": true, 00:13:52.855 "data_offset": 2048, 00:13:52.855 "data_size": 63488 00:13:52.855 } 00:13:52.855 ] 00:13:52.855 }' 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.855 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.423 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:53.423 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:53.423 [2024-11-27 08:45:50.141783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:54.361 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:54.361 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.362 "name": "raid_bdev1", 00:13:54.362 "uuid": "47403c38-df27-4f4a-94c1-d1807bfb24c0", 00:13:54.362 "strip_size_kb": 64, 00:13:54.362 "state": "online", 00:13:54.362 "raid_level": "raid0", 00:13:54.362 "superblock": true, 00:13:54.362 "num_base_bdevs": 4, 00:13:54.362 "num_base_bdevs_discovered": 4, 00:13:54.362 "num_base_bdevs_operational": 4, 00:13:54.362 "base_bdevs_list": [ 00:13:54.362 { 00:13:54.362 "name": "BaseBdev1", 00:13:54.362 "uuid": "f4379b38-7e95-53c4-983a-20385fb980c0", 00:13:54.362 "is_configured": true, 00:13:54.362 "data_offset": 2048, 00:13:54.362 "data_size": 63488 00:13:54.362 }, 00:13:54.362 { 00:13:54.362 "name": "BaseBdev2", 00:13:54.362 "uuid": "d3df976f-be61-5c62-b057-1c0bff283922", 00:13:54.362 "is_configured": true, 00:13:54.362 "data_offset": 2048, 00:13:54.362 "data_size": 63488 00:13:54.362 }, 00:13:54.362 { 00:13:54.362 "name": "BaseBdev3", 00:13:54.362 "uuid": "0b2dd611-9379-5af8-9c73-ab4f89d154df", 00:13:54.362 "is_configured": true, 00:13:54.362 "data_offset": 2048, 00:13:54.362 "data_size": 63488 00:13:54.362 }, 00:13:54.362 { 00:13:54.362 "name": "BaseBdev4", 00:13:54.362 "uuid": "3e0d1d8e-4c45-51dd-ad73-dba62972bdcc", 00:13:54.362 "is_configured": true, 00:13:54.362 "data_offset": 2048, 00:13:54.362 "data_size": 63488 00:13:54.362 } 00:13:54.362 ] 00:13:54.362 }' 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.362 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.930 [2024-11-27 08:45:51.535718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.930 [2024-11-27 08:45:51.535763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.930 [2024-11-27 08:45:51.539156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.930 [2024-11-27 08:45:51.539232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.930 [2024-11-27 08:45:51.539300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.930 [2024-11-27 08:45:51.539320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:54.930 { 00:13:54.930 "results": [ 00:13:54.930 { 00:13:54.930 "job": "raid_bdev1", 00:13:54.930 "core_mask": "0x1", 00:13:54.930 "workload": "randrw", 00:13:54.930 "percentage": 50, 00:13:54.930 "status": "finished", 00:13:54.930 "queue_depth": 1, 00:13:54.930 "io_size": 131072, 00:13:54.930 "runtime": 1.391251, 00:13:54.930 "iops": 9911.942561047576, 00:13:54.930 "mibps": 1238.992820130947, 00:13:54.930 "io_failed": 1, 00:13:54.930 "io_timeout": 0, 00:13:54.930 "avg_latency_us": 142.05611301177976, 00:13:54.930 "min_latency_us": 42.35636363636364, 00:13:54.930 "max_latency_us": 1869.2654545454545 00:13:54.930 } 00:13:54.930 ], 00:13:54.930 "core_count": 1 00:13:54.930 } 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71400 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' -z 71400 ']' 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 71400 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # uname 
00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 71400 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 71400' 00:13:54.930 killing process with pid 71400 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 71400 00:13:54.930 [2024-11-27 08:45:51.573786] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.930 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@975 -- # wait 71400 00:13:55.190 [2024-11-27 08:45:51.879506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lCrzeNKJvv 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:56.568 00:13:56.568 real 0m5.075s 00:13:56.568 user 0m6.226s 00:13:56.568 sys 0m0.682s 00:13:56.568 08:45:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:56.568 ************************************ 00:13:56.568 END TEST raid_write_error_test 00:13:56.568 ************************************ 00:13:56.568 08:45:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.568 08:45:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:56.568 08:45:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:56.568 08:45:53 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:13:56.568 08:45:53 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:56.568 08:45:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.568 ************************************ 00:13:56.568 START TEST raid_state_function_test 00:13:56.568 ************************************ 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test concat 4 false 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71545 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71545' 00:13:56.568 Process raid pid: 71545 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71545 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 71545 ']' 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:56.568 08:45:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.568 [2024-11-27 08:45:53.241977] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:13:56.568 [2024-11-27 08:45:53.242445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.827 [2024-11-27 08:45:53.434508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.085 [2024-11-27 08:45:53.611271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.345 [2024-11-27 08:45:53.854484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.345 [2024-11-27 08:45:53.854546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.605 [2024-11-27 08:45:54.291415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.605 [2024-11-27 08:45:54.291485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.605 [2024-11-27 08:45:54.291503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.605 [2024-11-27 08:45:54.291521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.605 [2024-11-27 08:45:54.291531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:57.605 [2024-11-27 08:45:54.291545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:57.605 [2024-11-27 08:45:54.291556] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:57.605 [2024-11-27 08:45:54.291570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.605 "name": "Existed_Raid", 00:13:57.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.605 "strip_size_kb": 64, 00:13:57.605 "state": "configuring", 00:13:57.605 "raid_level": "concat", 00:13:57.605 "superblock": false, 00:13:57.605 "num_base_bdevs": 4, 00:13:57.605 "num_base_bdevs_discovered": 0, 00:13:57.605 "num_base_bdevs_operational": 4, 00:13:57.605 "base_bdevs_list": [ 00:13:57.605 { 00:13:57.605 "name": "BaseBdev1", 00:13:57.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.605 "is_configured": false, 00:13:57.605 "data_offset": 0, 00:13:57.605 "data_size": 0 00:13:57.605 }, 00:13:57.605 { 00:13:57.605 "name": "BaseBdev2", 00:13:57.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.605 "is_configured": false, 00:13:57.605 "data_offset": 0, 00:13:57.605 "data_size": 0 00:13:57.605 }, 00:13:57.605 { 00:13:57.605 "name": "BaseBdev3", 00:13:57.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.605 "is_configured": false, 00:13:57.605 "data_offset": 0, 00:13:57.605 "data_size": 0 00:13:57.605 }, 00:13:57.605 { 00:13:57.605 "name": "BaseBdev4", 00:13:57.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.605 "is_configured": false, 00:13:57.605 "data_offset": 0, 00:13:57.605 "data_size": 0 00:13:57.605 } 00:13:57.605 ] 00:13:57.605 }' 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.605 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.185 [2024-11-27 08:45:54.835509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:58.185 [2024-11-27 08:45:54.835708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.185 [2024-11-27 08:45:54.847490] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.185 [2024-11-27 08:45:54.847669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.185 [2024-11-27 08:45:54.847786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.185 [2024-11-27 08:45:54.847846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.185 [2024-11-27 08:45:54.848058] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:58.185 [2024-11-27 08:45:54.848091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:58.185 [2024-11-27 08:45:54.848104] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:58.185 [2024-11-27 08:45:54.848119] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.185 [2024-11-27 08:45:54.896419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.185 BaseBdev1 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.185 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.185 [ 00:13:58.185 { 00:13:58.185 "name": "BaseBdev1", 00:13:58.185 "aliases": [ 00:13:58.185 "337efd3f-0ab4-488b-b3a9-b74ab2b6fb30" 00:13:58.185 ], 00:13:58.185 "product_name": "Malloc disk", 00:13:58.185 "block_size": 512, 00:13:58.185 "num_blocks": 65536, 00:13:58.185 "uuid": "337efd3f-0ab4-488b-b3a9-b74ab2b6fb30", 00:13:58.185 "assigned_rate_limits": { 00:13:58.185 "rw_ios_per_sec": 0, 00:13:58.185 "rw_mbytes_per_sec": 0, 00:13:58.185 "r_mbytes_per_sec": 0, 00:13:58.185 "w_mbytes_per_sec": 0 00:13:58.185 }, 00:13:58.185 "claimed": true, 00:13:58.185 "claim_type": "exclusive_write", 00:13:58.185 "zoned": false, 00:13:58.185 "supported_io_types": { 00:13:58.185 "read": true, 00:13:58.185 "write": true, 00:13:58.185 "unmap": true, 00:13:58.185 "flush": true, 00:13:58.185 "reset": true, 00:13:58.185 "nvme_admin": false, 00:13:58.185 "nvme_io": false, 00:13:58.185 "nvme_io_md": false, 00:13:58.185 "write_zeroes": true, 00:13:58.185 "zcopy": true, 00:13:58.185 "get_zone_info": false, 00:13:58.185 "zone_management": false, 00:13:58.185 "zone_append": false, 00:13:58.185 "compare": false, 00:13:58.185 "compare_and_write": false, 00:13:58.185 "abort": true, 00:13:58.185 "seek_hole": false, 00:13:58.185 "seek_data": false, 00:13:58.185 "copy": true, 00:13:58.185 "nvme_iov_md": false 00:13:58.185 }, 00:13:58.185 "memory_domains": [ 00:13:58.185 { 00:13:58.185 "dma_device_id": "system", 00:13:58.185 "dma_device_type": 1 00:13:58.185 }, 00:13:58.185 { 00:13:58.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.186 "dma_device_type": 2 00:13:58.186 } 00:13:58.186 ], 00:13:58.186 "driver_specific": {} 00:13:58.186 } 00:13:58.186 ] 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.186 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.445 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.445 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.445 "name": "Existed_Raid", 
00:13:58.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.445 "strip_size_kb": 64, 00:13:58.445 "state": "configuring", 00:13:58.445 "raid_level": "concat", 00:13:58.445 "superblock": false, 00:13:58.445 "num_base_bdevs": 4, 00:13:58.445 "num_base_bdevs_discovered": 1, 00:13:58.445 "num_base_bdevs_operational": 4, 00:13:58.445 "base_bdevs_list": [ 00:13:58.445 { 00:13:58.445 "name": "BaseBdev1", 00:13:58.445 "uuid": "337efd3f-0ab4-488b-b3a9-b74ab2b6fb30", 00:13:58.445 "is_configured": true, 00:13:58.445 "data_offset": 0, 00:13:58.445 "data_size": 65536 00:13:58.445 }, 00:13:58.445 { 00:13:58.445 "name": "BaseBdev2", 00:13:58.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.445 "is_configured": false, 00:13:58.445 "data_offset": 0, 00:13:58.445 "data_size": 0 00:13:58.445 }, 00:13:58.445 { 00:13:58.445 "name": "BaseBdev3", 00:13:58.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.445 "is_configured": false, 00:13:58.445 "data_offset": 0, 00:13:58.445 "data_size": 0 00:13:58.445 }, 00:13:58.445 { 00:13:58.445 "name": "BaseBdev4", 00:13:58.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.445 "is_configured": false, 00:13:58.445 "data_offset": 0, 00:13:58.445 "data_size": 0 00:13:58.445 } 00:13:58.445 ] 00:13:58.445 }' 00:13:58.445 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.445 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.704 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:58.704 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.704 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.704 [2024-11-27 08:45:55.448637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:58.704 [2024-11-27 08:45:55.448714] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:58.704 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.705 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:58.705 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.705 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.705 [2024-11-27 08:45:55.460666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.963 [2024-11-27 08:45:55.463439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.963 [2024-11-27 08:45:55.463612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.964 [2024-11-27 08:45:55.463731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:58.964 [2024-11-27 08:45:55.463796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:58.964 [2024-11-27 08:45:55.463907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:58.964 [2024-11-27 08:45:55.463965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.964 "name": "Existed_Raid", 00:13:58.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.964 "strip_size_kb": 64, 00:13:58.964 "state": "configuring", 00:13:58.964 "raid_level": "concat", 00:13:58.964 "superblock": false, 00:13:58.964 "num_base_bdevs": 4, 00:13:58.964 
"num_base_bdevs_discovered": 1, 00:13:58.964 "num_base_bdevs_operational": 4, 00:13:58.964 "base_bdevs_list": [ 00:13:58.964 { 00:13:58.964 "name": "BaseBdev1", 00:13:58.964 "uuid": "337efd3f-0ab4-488b-b3a9-b74ab2b6fb30", 00:13:58.964 "is_configured": true, 00:13:58.964 "data_offset": 0, 00:13:58.964 "data_size": 65536 00:13:58.964 }, 00:13:58.964 { 00:13:58.964 "name": "BaseBdev2", 00:13:58.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.964 "is_configured": false, 00:13:58.964 "data_offset": 0, 00:13:58.964 "data_size": 0 00:13:58.964 }, 00:13:58.964 { 00:13:58.964 "name": "BaseBdev3", 00:13:58.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.964 "is_configured": false, 00:13:58.964 "data_offset": 0, 00:13:58.964 "data_size": 0 00:13:58.964 }, 00:13:58.964 { 00:13:58.964 "name": "BaseBdev4", 00:13:58.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.964 "is_configured": false, 00:13:58.964 "data_offset": 0, 00:13:58.964 "data_size": 0 00:13:58.964 } 00:13:58.964 ] 00:13:58.964 }' 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.964 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.531 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:59.531 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.531 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.531 [2024-11-27 08:45:56.030498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.531 BaseBdev2 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:59.531 08:45:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.531 [ 00:13:59.531 { 00:13:59.531 "name": "BaseBdev2", 00:13:59.531 "aliases": [ 00:13:59.531 "52a7a88d-69e9-4347-9b69-fe13d8e7a853" 00:13:59.531 ], 00:13:59.531 "product_name": "Malloc disk", 00:13:59.531 "block_size": 512, 00:13:59.531 "num_blocks": 65536, 00:13:59.531 "uuid": "52a7a88d-69e9-4347-9b69-fe13d8e7a853", 00:13:59.531 "assigned_rate_limits": { 00:13:59.531 "rw_ios_per_sec": 0, 00:13:59.531 "rw_mbytes_per_sec": 0, 00:13:59.531 "r_mbytes_per_sec": 0, 00:13:59.531 "w_mbytes_per_sec": 0 00:13:59.531 }, 00:13:59.531 "claimed": true, 00:13:59.531 "claim_type": "exclusive_write", 00:13:59.531 "zoned": false, 00:13:59.531 "supported_io_types": { 
00:13:59.531 "read": true, 00:13:59.531 "write": true, 00:13:59.531 "unmap": true, 00:13:59.531 "flush": true, 00:13:59.531 "reset": true, 00:13:59.531 "nvme_admin": false, 00:13:59.531 "nvme_io": false, 00:13:59.531 "nvme_io_md": false, 00:13:59.531 "write_zeroes": true, 00:13:59.531 "zcopy": true, 00:13:59.531 "get_zone_info": false, 00:13:59.531 "zone_management": false, 00:13:59.531 "zone_append": false, 00:13:59.531 "compare": false, 00:13:59.531 "compare_and_write": false, 00:13:59.531 "abort": true, 00:13:59.531 "seek_hole": false, 00:13:59.531 "seek_data": false, 00:13:59.531 "copy": true, 00:13:59.531 "nvme_iov_md": false 00:13:59.531 }, 00:13:59.531 "memory_domains": [ 00:13:59.531 { 00:13:59.531 "dma_device_id": "system", 00:13:59.531 "dma_device_type": 1 00:13:59.531 }, 00:13:59.531 { 00:13:59.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.531 "dma_device_type": 2 00:13:59.531 } 00:13:59.531 ], 00:13:59.531 "driver_specific": {} 00:13:59.531 } 00:13:59.531 ] 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.531 "name": "Existed_Raid", 00:13:59.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.531 "strip_size_kb": 64, 00:13:59.531 "state": "configuring", 00:13:59.531 "raid_level": "concat", 00:13:59.531 "superblock": false, 00:13:59.531 "num_base_bdevs": 4, 00:13:59.531 "num_base_bdevs_discovered": 2, 00:13:59.531 "num_base_bdevs_operational": 4, 00:13:59.531 "base_bdevs_list": [ 00:13:59.531 { 00:13:59.531 "name": "BaseBdev1", 00:13:59.531 "uuid": "337efd3f-0ab4-488b-b3a9-b74ab2b6fb30", 00:13:59.531 "is_configured": true, 00:13:59.531 "data_offset": 0, 00:13:59.531 "data_size": 65536 00:13:59.531 }, 00:13:59.531 { 00:13:59.531 "name": "BaseBdev2", 00:13:59.531 "uuid": "52a7a88d-69e9-4347-9b69-fe13d8e7a853", 00:13:59.531 
"is_configured": true, 00:13:59.531 "data_offset": 0, 00:13:59.531 "data_size": 65536 00:13:59.531 }, 00:13:59.531 { 00:13:59.531 "name": "BaseBdev3", 00:13:59.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.531 "is_configured": false, 00:13:59.531 "data_offset": 0, 00:13:59.531 "data_size": 0 00:13:59.531 }, 00:13:59.531 { 00:13:59.531 "name": "BaseBdev4", 00:13:59.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.531 "is_configured": false, 00:13:59.531 "data_offset": 0, 00:13:59.531 "data_size": 0 00:13:59.531 } 00:13:59.531 ] 00:13:59.531 }' 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.531 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.790 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:59.790 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.790 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.049 [2024-11-27 08:45:56.602168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.049 BaseBdev3 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.049 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.050 [ 00:14:00.050 { 00:14:00.050 "name": "BaseBdev3", 00:14:00.050 "aliases": [ 00:14:00.050 "01d39cc2-5310-4724-af7e-e353f71bb47a" 00:14:00.050 ], 00:14:00.050 "product_name": "Malloc disk", 00:14:00.050 "block_size": 512, 00:14:00.050 "num_blocks": 65536, 00:14:00.050 "uuid": "01d39cc2-5310-4724-af7e-e353f71bb47a", 00:14:00.050 "assigned_rate_limits": { 00:14:00.050 "rw_ios_per_sec": 0, 00:14:00.050 "rw_mbytes_per_sec": 0, 00:14:00.050 "r_mbytes_per_sec": 0, 00:14:00.050 "w_mbytes_per_sec": 0 00:14:00.050 }, 00:14:00.050 "claimed": true, 00:14:00.050 "claim_type": "exclusive_write", 00:14:00.050 "zoned": false, 00:14:00.050 "supported_io_types": { 00:14:00.050 "read": true, 00:14:00.050 "write": true, 00:14:00.050 "unmap": true, 00:14:00.050 "flush": true, 00:14:00.050 "reset": true, 00:14:00.050 "nvme_admin": false, 00:14:00.050 "nvme_io": false, 00:14:00.050 "nvme_io_md": false, 00:14:00.050 "write_zeroes": true, 00:14:00.050 "zcopy": true, 00:14:00.050 "get_zone_info": false, 00:14:00.050 "zone_management": false, 00:14:00.050 "zone_append": false, 00:14:00.050 "compare": false, 00:14:00.050 "compare_and_write": false, 
00:14:00.050 "abort": true, 00:14:00.050 "seek_hole": false, 00:14:00.050 "seek_data": false, 00:14:00.050 "copy": true, 00:14:00.050 "nvme_iov_md": false 00:14:00.050 }, 00:14:00.050 "memory_domains": [ 00:14:00.050 { 00:14:00.050 "dma_device_id": "system", 00:14:00.050 "dma_device_type": 1 00:14:00.050 }, 00:14:00.050 { 00:14:00.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.050 "dma_device_type": 2 00:14:00.050 } 00:14:00.050 ], 00:14:00.050 "driver_specific": {} 00:14:00.050 } 00:14:00.050 ] 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.050 "name": "Existed_Raid", 00:14:00.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.050 "strip_size_kb": 64, 00:14:00.050 "state": "configuring", 00:14:00.050 "raid_level": "concat", 00:14:00.050 "superblock": false, 00:14:00.050 "num_base_bdevs": 4, 00:14:00.050 "num_base_bdevs_discovered": 3, 00:14:00.050 "num_base_bdevs_operational": 4, 00:14:00.050 "base_bdevs_list": [ 00:14:00.050 { 00:14:00.050 "name": "BaseBdev1", 00:14:00.050 "uuid": "337efd3f-0ab4-488b-b3a9-b74ab2b6fb30", 00:14:00.050 "is_configured": true, 00:14:00.050 "data_offset": 0, 00:14:00.050 "data_size": 65536 00:14:00.050 }, 00:14:00.050 { 00:14:00.050 "name": "BaseBdev2", 00:14:00.050 "uuid": "52a7a88d-69e9-4347-9b69-fe13d8e7a853", 00:14:00.050 "is_configured": true, 00:14:00.050 "data_offset": 0, 00:14:00.050 "data_size": 65536 00:14:00.050 }, 00:14:00.050 { 00:14:00.050 "name": "BaseBdev3", 00:14:00.050 "uuid": "01d39cc2-5310-4724-af7e-e353f71bb47a", 00:14:00.050 "is_configured": true, 00:14:00.050 "data_offset": 0, 00:14:00.050 "data_size": 65536 00:14:00.050 }, 00:14:00.050 { 00:14:00.050 "name": "BaseBdev4", 00:14:00.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.050 "is_configured": false, 
00:14:00.050 "data_offset": 0, 00:14:00.050 "data_size": 0 00:14:00.050 } 00:14:00.050 ] 00:14:00.050 }' 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.050 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.619 [2024-11-27 08:45:57.201896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:00.619 [2024-11-27 08:45:57.202165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:00.619 [2024-11-27 08:45:57.202242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:00.619 [2024-11-27 08:45:57.202658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:00.619 [2024-11-27 08:45:57.202902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:00.619 [2024-11-27 08:45:57.202925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:00.619 [2024-11-27 08:45:57.203297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.619 BaseBdev4 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.619 [ 00:14:00.619 { 00:14:00.619 "name": "BaseBdev4", 00:14:00.619 "aliases": [ 00:14:00.619 "f72bbbd6-b31e-4045-93f9-9ffe807f780f" 00:14:00.619 ], 00:14:00.619 "product_name": "Malloc disk", 00:14:00.619 "block_size": 512, 00:14:00.619 "num_blocks": 65536, 00:14:00.619 "uuid": "f72bbbd6-b31e-4045-93f9-9ffe807f780f", 00:14:00.619 "assigned_rate_limits": { 00:14:00.619 "rw_ios_per_sec": 0, 00:14:00.619 "rw_mbytes_per_sec": 0, 00:14:00.619 "r_mbytes_per_sec": 0, 00:14:00.619 "w_mbytes_per_sec": 0 00:14:00.619 }, 00:14:00.619 "claimed": true, 00:14:00.619 "claim_type": "exclusive_write", 00:14:00.619 "zoned": false, 00:14:00.619 "supported_io_types": { 00:14:00.619 "read": true, 00:14:00.619 "write": true, 00:14:00.619 "unmap": true, 00:14:00.619 "flush": true, 00:14:00.619 "reset": true, 00:14:00.619 
"nvme_admin": false, 00:14:00.619 "nvme_io": false, 00:14:00.619 "nvme_io_md": false, 00:14:00.619 "write_zeroes": true, 00:14:00.619 "zcopy": true, 00:14:00.619 "get_zone_info": false, 00:14:00.619 "zone_management": false, 00:14:00.619 "zone_append": false, 00:14:00.619 "compare": false, 00:14:00.619 "compare_and_write": false, 00:14:00.619 "abort": true, 00:14:00.619 "seek_hole": false, 00:14:00.619 "seek_data": false, 00:14:00.619 "copy": true, 00:14:00.619 "nvme_iov_md": false 00:14:00.619 }, 00:14:00.619 "memory_domains": [ 00:14:00.619 { 00:14:00.619 "dma_device_id": "system", 00:14:00.619 "dma_device_type": 1 00:14:00.619 }, 00:14:00.619 { 00:14:00.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.619 "dma_device_type": 2 00:14:00.619 } 00:14:00.619 ], 00:14:00.619 "driver_specific": {} 00:14:00.619 } 00:14:00.619 ] 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.619 
08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.619 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.619 "name": "Existed_Raid", 00:14:00.619 "uuid": "889c6f81-fcdb-453b-81b5-3472ccaa45c9", 00:14:00.619 "strip_size_kb": 64, 00:14:00.619 "state": "online", 00:14:00.619 "raid_level": "concat", 00:14:00.619 "superblock": false, 00:14:00.619 "num_base_bdevs": 4, 00:14:00.619 "num_base_bdevs_discovered": 4, 00:14:00.619 "num_base_bdevs_operational": 4, 00:14:00.619 "base_bdevs_list": [ 00:14:00.620 { 00:14:00.620 "name": "BaseBdev1", 00:14:00.620 "uuid": "337efd3f-0ab4-488b-b3a9-b74ab2b6fb30", 00:14:00.620 "is_configured": true, 00:14:00.620 "data_offset": 0, 00:14:00.620 "data_size": 65536 00:14:00.620 }, 00:14:00.620 { 00:14:00.620 "name": "BaseBdev2", 00:14:00.620 "uuid": "52a7a88d-69e9-4347-9b69-fe13d8e7a853", 00:14:00.620 "is_configured": true, 00:14:00.620 "data_offset": 0, 00:14:00.620 "data_size": 65536 00:14:00.620 }, 00:14:00.620 { 00:14:00.620 "name": "BaseBdev3", 
00:14:00.620 "uuid": "01d39cc2-5310-4724-af7e-e353f71bb47a", 00:14:00.620 "is_configured": true, 00:14:00.620 "data_offset": 0, 00:14:00.620 "data_size": 65536 00:14:00.620 }, 00:14:00.620 { 00:14:00.620 "name": "BaseBdev4", 00:14:00.620 "uuid": "f72bbbd6-b31e-4045-93f9-9ffe807f780f", 00:14:00.620 "is_configured": true, 00:14:00.620 "data_offset": 0, 00:14:00.620 "data_size": 65536 00:14:00.620 } 00:14:00.620 ] 00:14:00.620 }' 00:14:00.620 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.620 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.187 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.187 [2024-11-27 08:45:57.762594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.188 
08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.188 "name": "Existed_Raid", 00:14:01.188 "aliases": [ 00:14:01.188 "889c6f81-fcdb-453b-81b5-3472ccaa45c9" 00:14:01.188 ], 00:14:01.188 "product_name": "Raid Volume", 00:14:01.188 "block_size": 512, 00:14:01.188 "num_blocks": 262144, 00:14:01.188 "uuid": "889c6f81-fcdb-453b-81b5-3472ccaa45c9", 00:14:01.188 "assigned_rate_limits": { 00:14:01.188 "rw_ios_per_sec": 0, 00:14:01.188 "rw_mbytes_per_sec": 0, 00:14:01.188 "r_mbytes_per_sec": 0, 00:14:01.188 "w_mbytes_per_sec": 0 00:14:01.188 }, 00:14:01.188 "claimed": false, 00:14:01.188 "zoned": false, 00:14:01.188 "supported_io_types": { 00:14:01.188 "read": true, 00:14:01.188 "write": true, 00:14:01.188 "unmap": true, 00:14:01.188 "flush": true, 00:14:01.188 "reset": true, 00:14:01.188 "nvme_admin": false, 00:14:01.188 "nvme_io": false, 00:14:01.188 "nvme_io_md": false, 00:14:01.188 "write_zeroes": true, 00:14:01.188 "zcopy": false, 00:14:01.188 "get_zone_info": false, 00:14:01.188 "zone_management": false, 00:14:01.188 "zone_append": false, 00:14:01.188 "compare": false, 00:14:01.188 "compare_and_write": false, 00:14:01.188 "abort": false, 00:14:01.188 "seek_hole": false, 00:14:01.188 "seek_data": false, 00:14:01.188 "copy": false, 00:14:01.188 "nvme_iov_md": false 00:14:01.188 }, 00:14:01.188 "memory_domains": [ 00:14:01.188 { 00:14:01.188 "dma_device_id": "system", 00:14:01.188 "dma_device_type": 1 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.188 "dma_device_type": 2 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "dma_device_id": "system", 00:14:01.188 "dma_device_type": 1 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.188 "dma_device_type": 2 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "dma_device_id": "system", 00:14:01.188 "dma_device_type": 1 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:01.188 "dma_device_type": 2 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "dma_device_id": "system", 00:14:01.188 "dma_device_type": 1 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.188 "dma_device_type": 2 00:14:01.188 } 00:14:01.188 ], 00:14:01.188 "driver_specific": { 00:14:01.188 "raid": { 00:14:01.188 "uuid": "889c6f81-fcdb-453b-81b5-3472ccaa45c9", 00:14:01.188 "strip_size_kb": 64, 00:14:01.188 "state": "online", 00:14:01.188 "raid_level": "concat", 00:14:01.188 "superblock": false, 00:14:01.188 "num_base_bdevs": 4, 00:14:01.188 "num_base_bdevs_discovered": 4, 00:14:01.188 "num_base_bdevs_operational": 4, 00:14:01.188 "base_bdevs_list": [ 00:14:01.188 { 00:14:01.188 "name": "BaseBdev1", 00:14:01.188 "uuid": "337efd3f-0ab4-488b-b3a9-b74ab2b6fb30", 00:14:01.188 "is_configured": true, 00:14:01.188 "data_offset": 0, 00:14:01.188 "data_size": 65536 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "name": "BaseBdev2", 00:14:01.188 "uuid": "52a7a88d-69e9-4347-9b69-fe13d8e7a853", 00:14:01.188 "is_configured": true, 00:14:01.188 "data_offset": 0, 00:14:01.188 "data_size": 65536 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "name": "BaseBdev3", 00:14:01.188 "uuid": "01d39cc2-5310-4724-af7e-e353f71bb47a", 00:14:01.188 "is_configured": true, 00:14:01.188 "data_offset": 0, 00:14:01.188 "data_size": 65536 00:14:01.188 }, 00:14:01.188 { 00:14:01.188 "name": "BaseBdev4", 00:14:01.188 "uuid": "f72bbbd6-b31e-4045-93f9-9ffe807f780f", 00:14:01.188 "is_configured": true, 00:14:01.188 "data_offset": 0, 00:14:01.188 "data_size": 65536 00:14:01.188 } 00:14:01.188 ] 00:14:01.188 } 00:14:01.188 } 00:14:01.188 }' 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:01.188 BaseBdev2 
00:14:01.188 BaseBdev3 00:14:01.188 BaseBdev4' 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.188 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.447 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.447 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.447 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.447 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.447 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:01.447 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.447 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.447 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.447 08:45:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.447 08:45:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.447 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.447 [2024-11-27 08:45:58.162268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.447 [2024-11-27 08:45:58.162311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.447 [2024-11-27 08:45:58.162410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.706 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.707 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.707 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.707 "name": "Existed_Raid", 00:14:01.707 "uuid": "889c6f81-fcdb-453b-81b5-3472ccaa45c9", 00:14:01.707 "strip_size_kb": 64, 00:14:01.707 "state": "offline", 00:14:01.707 "raid_level": "concat", 00:14:01.707 "superblock": false, 00:14:01.707 "num_base_bdevs": 4, 00:14:01.707 "num_base_bdevs_discovered": 3, 00:14:01.707 "num_base_bdevs_operational": 3, 00:14:01.707 "base_bdevs_list": [ 00:14:01.707 { 00:14:01.707 "name": null, 00:14:01.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.707 "is_configured": false, 00:14:01.707 "data_offset": 0, 00:14:01.707 "data_size": 65536 00:14:01.707 }, 00:14:01.707 { 00:14:01.707 "name": "BaseBdev2", 00:14:01.707 "uuid": "52a7a88d-69e9-4347-9b69-fe13d8e7a853", 00:14:01.707 "is_configured": 
true, 00:14:01.707 "data_offset": 0, 00:14:01.707 "data_size": 65536 00:14:01.707 }, 00:14:01.707 { 00:14:01.707 "name": "BaseBdev3", 00:14:01.707 "uuid": "01d39cc2-5310-4724-af7e-e353f71bb47a", 00:14:01.707 "is_configured": true, 00:14:01.707 "data_offset": 0, 00:14:01.707 "data_size": 65536 00:14:01.707 }, 00:14:01.707 { 00:14:01.707 "name": "BaseBdev4", 00:14:01.707 "uuid": "f72bbbd6-b31e-4045-93f9-9ffe807f780f", 00:14:01.707 "is_configured": true, 00:14:01.707 "data_offset": 0, 00:14:01.707 "data_size": 65536 00:14:01.707 } 00:14:01.707 ] 00:14:01.707 }' 00:14:01.707 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.707 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 [2024-11-27 08:45:58.825118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.273 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 [2024-11-27 08:45:58.972111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:02.532 08:45:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.532 [2024-11-27 08:45:59.124791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:02.532 [2024-11-27 08:45:59.124864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.532 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 BaseBdev2 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
bdev_timeout=2000 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 [ 00:14:02.793 { 00:14:02.793 "name": "BaseBdev2", 00:14:02.793 "aliases": [ 00:14:02.793 "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f" 00:14:02.793 ], 00:14:02.793 "product_name": "Malloc disk", 00:14:02.793 "block_size": 512, 00:14:02.793 "num_blocks": 65536, 00:14:02.793 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:02.793 "assigned_rate_limits": { 00:14:02.793 "rw_ios_per_sec": 0, 00:14:02.793 "rw_mbytes_per_sec": 0, 00:14:02.793 "r_mbytes_per_sec": 0, 00:14:02.793 "w_mbytes_per_sec": 0 00:14:02.793 }, 00:14:02.793 "claimed": false, 00:14:02.793 "zoned": false, 00:14:02.793 "supported_io_types": { 00:14:02.793 "read": true, 00:14:02.793 "write": true, 00:14:02.793 "unmap": true, 00:14:02.793 "flush": true, 00:14:02.793 "reset": true, 00:14:02.793 "nvme_admin": false, 00:14:02.793 "nvme_io": false, 00:14:02.793 "nvme_io_md": false, 00:14:02.793 "write_zeroes": true, 00:14:02.793 "zcopy": true, 00:14:02.793 "get_zone_info": false, 00:14:02.793 "zone_management": false, 00:14:02.793 "zone_append": false, 00:14:02.793 "compare": false, 00:14:02.793 "compare_and_write": false, 00:14:02.793 "abort": true, 00:14:02.793 "seek_hole": false, 00:14:02.793 
"seek_data": false, 00:14:02.793 "copy": true, 00:14:02.793 "nvme_iov_md": false 00:14:02.793 }, 00:14:02.793 "memory_domains": [ 00:14:02.793 { 00:14:02.793 "dma_device_id": "system", 00:14:02.793 "dma_device_type": 1 00:14:02.793 }, 00:14:02.793 { 00:14:02.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.793 "dma_device_type": 2 00:14:02.793 } 00:14:02.793 ], 00:14:02.793 "driver_specific": {} 00:14:02.793 } 00:14:02.793 ] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 BaseBdev3 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 
00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 [ 00:14:02.793 { 00:14:02.793 "name": "BaseBdev3", 00:14:02.793 "aliases": [ 00:14:02.793 "97e2d073-7582-452f-a34b-e90f28a1cba0" 00:14:02.793 ], 00:14:02.793 "product_name": "Malloc disk", 00:14:02.793 "block_size": 512, 00:14:02.793 "num_blocks": 65536, 00:14:02.793 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:02.793 "assigned_rate_limits": { 00:14:02.793 "rw_ios_per_sec": 0, 00:14:02.793 "rw_mbytes_per_sec": 0, 00:14:02.793 "r_mbytes_per_sec": 0, 00:14:02.793 "w_mbytes_per_sec": 0 00:14:02.793 }, 00:14:02.793 "claimed": false, 00:14:02.793 "zoned": false, 00:14:02.793 "supported_io_types": { 00:14:02.793 "read": true, 00:14:02.793 "write": true, 00:14:02.793 "unmap": true, 00:14:02.793 "flush": true, 00:14:02.793 "reset": true, 00:14:02.793 "nvme_admin": false, 00:14:02.793 "nvme_io": false, 00:14:02.793 "nvme_io_md": false, 00:14:02.793 "write_zeroes": true, 00:14:02.793 "zcopy": true, 00:14:02.793 "get_zone_info": false, 00:14:02.793 "zone_management": false, 00:14:02.793 "zone_append": false, 00:14:02.793 "compare": false, 00:14:02.793 "compare_and_write": false, 00:14:02.793 "abort": true, 00:14:02.793 "seek_hole": false, 00:14:02.793 "seek_data": false, 
00:14:02.793 "copy": true, 00:14:02.793 "nvme_iov_md": false 00:14:02.793 }, 00:14:02.793 "memory_domains": [ 00:14:02.793 { 00:14:02.793 "dma_device_id": "system", 00:14:02.793 "dma_device_type": 1 00:14:02.793 }, 00:14:02.793 { 00:14:02.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.793 "dma_device_type": 2 00:14:02.793 } 00:14:02.793 ], 00:14:02.793 "driver_specific": {} 00:14:02.793 } 00:14:02.793 ] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.794 BaseBdev4 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:02.794 
08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.794 [ 00:14:02.794 { 00:14:02.794 "name": "BaseBdev4", 00:14:02.794 "aliases": [ 00:14:02.794 "04062e65-3245-4ad9-8cce-15c179f05eb9" 00:14:02.794 ], 00:14:02.794 "product_name": "Malloc disk", 00:14:02.794 "block_size": 512, 00:14:02.794 "num_blocks": 65536, 00:14:02.794 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:02.794 "assigned_rate_limits": { 00:14:02.794 "rw_ios_per_sec": 0, 00:14:02.794 "rw_mbytes_per_sec": 0, 00:14:02.794 "r_mbytes_per_sec": 0, 00:14:02.794 "w_mbytes_per_sec": 0 00:14:02.794 }, 00:14:02.794 "claimed": false, 00:14:02.794 "zoned": false, 00:14:02.794 "supported_io_types": { 00:14:02.794 "read": true, 00:14:02.794 "write": true, 00:14:02.794 "unmap": true, 00:14:02.794 "flush": true, 00:14:02.794 "reset": true, 00:14:02.794 "nvme_admin": false, 00:14:02.794 "nvme_io": false, 00:14:02.794 "nvme_io_md": false, 00:14:02.794 "write_zeroes": true, 00:14:02.794 "zcopy": true, 00:14:02.794 "get_zone_info": false, 00:14:02.794 "zone_management": false, 00:14:02.794 "zone_append": false, 00:14:02.794 "compare": false, 00:14:02.794 "compare_and_write": false, 00:14:02.794 "abort": true, 00:14:02.794 "seek_hole": false, 00:14:02.794 "seek_data": false, 00:14:02.794 
"copy": true, 00:14:02.794 "nvme_iov_md": false 00:14:02.794 }, 00:14:02.794 "memory_domains": [ 00:14:02.794 { 00:14:02.794 "dma_device_id": "system", 00:14:02.794 "dma_device_type": 1 00:14:02.794 }, 00:14:02.794 { 00:14:02.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.794 "dma_device_type": 2 00:14:02.794 } 00:14:02.794 ], 00:14:02.794 "driver_specific": {} 00:14:02.794 } 00:14:02.794 ] 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.794 [2024-11-27 08:45:59.508145] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.794 [2024-11-27 08:45:59.508207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.794 [2024-11-27 08:45:59.508242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.794 [2024-11-27 08:45:59.510860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.794 [2024-11-27 08:45:59.511086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.794 08:45:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.794 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.053 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.053 "name": "Existed_Raid", 00:14:03.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.053 "strip_size_kb": 64, 00:14:03.053 "state": "configuring", 00:14:03.053 
"raid_level": "concat", 00:14:03.053 "superblock": false, 00:14:03.053 "num_base_bdevs": 4, 00:14:03.053 "num_base_bdevs_discovered": 3, 00:14:03.053 "num_base_bdevs_operational": 4, 00:14:03.053 "base_bdevs_list": [ 00:14:03.053 { 00:14:03.053 "name": "BaseBdev1", 00:14:03.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.053 "is_configured": false, 00:14:03.053 "data_offset": 0, 00:14:03.053 "data_size": 0 00:14:03.053 }, 00:14:03.053 { 00:14:03.053 "name": "BaseBdev2", 00:14:03.053 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:03.053 "is_configured": true, 00:14:03.053 "data_offset": 0, 00:14:03.053 "data_size": 65536 00:14:03.053 }, 00:14:03.053 { 00:14:03.053 "name": "BaseBdev3", 00:14:03.053 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:03.053 "is_configured": true, 00:14:03.053 "data_offset": 0, 00:14:03.053 "data_size": 65536 00:14:03.053 }, 00:14:03.053 { 00:14:03.053 "name": "BaseBdev4", 00:14:03.053 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:03.053 "is_configured": true, 00:14:03.053 "data_offset": 0, 00:14:03.053 "data_size": 65536 00:14:03.053 } 00:14:03.053 ] 00:14:03.053 }' 00:14:03.053 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.053 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.313 [2024-11-27 08:46:00.028332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.313 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.572 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.572 "name": "Existed_Raid", 00:14:03.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.572 "strip_size_kb": 64, 00:14:03.572 "state": "configuring", 00:14:03.572 "raid_level": "concat", 00:14:03.572 "superblock": false, 
00:14:03.572 "num_base_bdevs": 4, 00:14:03.572 "num_base_bdevs_discovered": 2, 00:14:03.572 "num_base_bdevs_operational": 4, 00:14:03.572 "base_bdevs_list": [ 00:14:03.572 { 00:14:03.572 "name": "BaseBdev1", 00:14:03.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.572 "is_configured": false, 00:14:03.572 "data_offset": 0, 00:14:03.572 "data_size": 0 00:14:03.572 }, 00:14:03.572 { 00:14:03.572 "name": null, 00:14:03.572 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:03.572 "is_configured": false, 00:14:03.572 "data_offset": 0, 00:14:03.572 "data_size": 65536 00:14:03.572 }, 00:14:03.572 { 00:14:03.572 "name": "BaseBdev3", 00:14:03.572 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:03.572 "is_configured": true, 00:14:03.572 "data_offset": 0, 00:14:03.572 "data_size": 65536 00:14:03.572 }, 00:14:03.572 { 00:14:03.572 "name": "BaseBdev4", 00:14:03.572 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:03.572 "is_configured": true, 00:14:03.572 "data_offset": 0, 00:14:03.572 "data_size": 65536 00:14:03.572 } 00:14:03.572 ] 00:14:03.572 }' 00:14:03.572 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.572 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.831 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.831 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.831 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.831 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:03.831 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.831 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:03.831 08:46:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:03.831 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.831 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.090 [2024-11-27 08:46:00.593460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.090 BaseBdev1 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.090 [ 00:14:04.090 { 00:14:04.090 "name": "BaseBdev1", 00:14:04.090 "aliases": [ 00:14:04.090 "3f88a879-49e5-4a09-a0e7-491ecdc172d3" 00:14:04.090 ], 00:14:04.090 "product_name": "Malloc disk", 00:14:04.090 "block_size": 512, 00:14:04.090 "num_blocks": 65536, 00:14:04.090 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:04.090 "assigned_rate_limits": { 00:14:04.090 "rw_ios_per_sec": 0, 00:14:04.090 "rw_mbytes_per_sec": 0, 00:14:04.090 "r_mbytes_per_sec": 0, 00:14:04.090 "w_mbytes_per_sec": 0 00:14:04.090 }, 00:14:04.090 "claimed": true, 00:14:04.090 "claim_type": "exclusive_write", 00:14:04.090 "zoned": false, 00:14:04.090 "supported_io_types": { 00:14:04.090 "read": true, 00:14:04.090 "write": true, 00:14:04.090 "unmap": true, 00:14:04.090 "flush": true, 00:14:04.090 "reset": true, 00:14:04.090 "nvme_admin": false, 00:14:04.090 "nvme_io": false, 00:14:04.090 "nvme_io_md": false, 00:14:04.090 "write_zeroes": true, 00:14:04.090 "zcopy": true, 00:14:04.090 "get_zone_info": false, 00:14:04.090 "zone_management": false, 00:14:04.090 "zone_append": false, 00:14:04.090 "compare": false, 00:14:04.090 "compare_and_write": false, 00:14:04.090 "abort": true, 00:14:04.090 "seek_hole": false, 00:14:04.090 "seek_data": false, 00:14:04.090 "copy": true, 00:14:04.090 "nvme_iov_md": false 00:14:04.090 }, 00:14:04.090 "memory_domains": [ 00:14:04.090 { 00:14:04.090 "dma_device_id": "system", 00:14:04.090 "dma_device_type": 1 00:14:04.090 }, 00:14:04.090 { 00:14:04.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.090 "dma_device_type": 2 00:14:04.090 } 00:14:04.090 ], 00:14:04.090 "driver_specific": {} 00:14:04.090 } 00:14:04.090 ] 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.090 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.091 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.091 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.091 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.091 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.091 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.091 "name": "Existed_Raid", 00:14:04.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.091 "strip_size_kb": 64, 00:14:04.091 "state": "configuring", 00:14:04.091 "raid_level": "concat", 00:14:04.091 "superblock": false, 
00:14:04.091 "num_base_bdevs": 4, 00:14:04.091 "num_base_bdevs_discovered": 3, 00:14:04.091 "num_base_bdevs_operational": 4, 00:14:04.091 "base_bdevs_list": [ 00:14:04.091 { 00:14:04.091 "name": "BaseBdev1", 00:14:04.091 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:04.091 "is_configured": true, 00:14:04.091 "data_offset": 0, 00:14:04.091 "data_size": 65536 00:14:04.091 }, 00:14:04.091 { 00:14:04.091 "name": null, 00:14:04.091 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:04.091 "is_configured": false, 00:14:04.091 "data_offset": 0, 00:14:04.091 "data_size": 65536 00:14:04.091 }, 00:14:04.091 { 00:14:04.091 "name": "BaseBdev3", 00:14:04.091 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:04.091 "is_configured": true, 00:14:04.091 "data_offset": 0, 00:14:04.091 "data_size": 65536 00:14:04.091 }, 00:14:04.091 { 00:14:04.091 "name": "BaseBdev4", 00:14:04.091 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:04.091 "is_configured": true, 00:14:04.091 "data_offset": 0, 00:14:04.091 "data_size": 65536 00:14:04.091 } 00:14:04.091 ] 00:14:04.091 }' 00:14:04.091 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.091 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:04.659 08:46:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.659 [2024-11-27 08:46:01.169724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.659 08:46:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.659 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.659 "name": "Existed_Raid", 00:14:04.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.659 "strip_size_kb": 64, 00:14:04.659 "state": "configuring", 00:14:04.659 "raid_level": "concat", 00:14:04.659 "superblock": false, 00:14:04.659 "num_base_bdevs": 4, 00:14:04.659 "num_base_bdevs_discovered": 2, 00:14:04.659 "num_base_bdevs_operational": 4, 00:14:04.659 "base_bdevs_list": [ 00:14:04.659 { 00:14:04.659 "name": "BaseBdev1", 00:14:04.659 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:04.659 "is_configured": true, 00:14:04.659 "data_offset": 0, 00:14:04.659 "data_size": 65536 00:14:04.659 }, 00:14:04.659 { 00:14:04.659 "name": null, 00:14:04.659 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:04.659 "is_configured": false, 00:14:04.659 "data_offset": 0, 00:14:04.659 "data_size": 65536 00:14:04.659 }, 00:14:04.659 { 00:14:04.659 "name": null, 00:14:04.659 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:04.659 "is_configured": false, 00:14:04.659 "data_offset": 0, 00:14:04.659 "data_size": 65536 00:14:04.659 }, 00:14:04.659 { 00:14:04.659 "name": "BaseBdev4", 00:14:04.659 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:04.659 "is_configured": true, 00:14:04.659 "data_offset": 0, 00:14:04.659 "data_size": 65536 00:14:04.659 } 00:14:04.660 ] 00:14:04.660 }' 00:14:04.660 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.660 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 [2024-11-27 08:46:01.741878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.226 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.226 "name": "Existed_Raid", 00:14:05.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.226 "strip_size_kb": 64, 00:14:05.226 "state": "configuring", 00:14:05.226 "raid_level": "concat", 00:14:05.226 "superblock": false, 00:14:05.226 "num_base_bdevs": 4, 00:14:05.226 "num_base_bdevs_discovered": 3, 00:14:05.226 "num_base_bdevs_operational": 4, 00:14:05.226 "base_bdevs_list": [ 00:14:05.226 { 00:14:05.226 "name": "BaseBdev1", 00:14:05.226 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:05.226 "is_configured": true, 00:14:05.226 "data_offset": 0, 00:14:05.226 "data_size": 65536 00:14:05.226 }, 00:14:05.226 { 00:14:05.226 "name": null, 00:14:05.226 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:05.226 "is_configured": false, 00:14:05.226 "data_offset": 0, 00:14:05.226 "data_size": 65536 00:14:05.226 }, 00:14:05.226 { 00:14:05.226 "name": "BaseBdev3", 00:14:05.226 "uuid": 
"97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:05.226 "is_configured": true, 00:14:05.226 "data_offset": 0, 00:14:05.226 "data_size": 65536 00:14:05.226 }, 00:14:05.226 { 00:14:05.226 "name": "BaseBdev4", 00:14:05.226 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:05.227 "is_configured": true, 00:14:05.227 "data_offset": 0, 00:14:05.227 "data_size": 65536 00:14:05.227 } 00:14:05.227 ] 00:14:05.227 }' 00:14:05.227 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.227 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.794 [2024-11-27 08:46:02.310046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.794 "name": "Existed_Raid", 00:14:05.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.794 "strip_size_kb": 64, 00:14:05.794 "state": "configuring", 00:14:05.794 "raid_level": "concat", 00:14:05.794 "superblock": false, 00:14:05.794 "num_base_bdevs": 4, 00:14:05.794 
"num_base_bdevs_discovered": 2, 00:14:05.794 "num_base_bdevs_operational": 4, 00:14:05.794 "base_bdevs_list": [ 00:14:05.794 { 00:14:05.794 "name": null, 00:14:05.794 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:05.794 "is_configured": false, 00:14:05.794 "data_offset": 0, 00:14:05.794 "data_size": 65536 00:14:05.794 }, 00:14:05.794 { 00:14:05.794 "name": null, 00:14:05.794 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:05.794 "is_configured": false, 00:14:05.794 "data_offset": 0, 00:14:05.794 "data_size": 65536 00:14:05.794 }, 00:14:05.794 { 00:14:05.794 "name": "BaseBdev3", 00:14:05.794 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:05.794 "is_configured": true, 00:14:05.794 "data_offset": 0, 00:14:05.794 "data_size": 65536 00:14:05.794 }, 00:14:05.794 { 00:14:05.794 "name": "BaseBdev4", 00:14:05.794 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:05.794 "is_configured": true, 00:14:05.794 "data_offset": 0, 00:14:05.794 "data_size": 65536 00:14:05.794 } 00:14:05.794 ] 00:14:05.794 }' 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.794 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.361 [2024-11-27 08:46:02.943568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.361 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.361 "name": "Existed_Raid", 00:14:06.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.361 "strip_size_kb": 64, 00:14:06.361 "state": "configuring", 00:14:06.361 "raid_level": "concat", 00:14:06.361 "superblock": false, 00:14:06.361 "num_base_bdevs": 4, 00:14:06.361 "num_base_bdevs_discovered": 3, 00:14:06.361 "num_base_bdevs_operational": 4, 00:14:06.361 "base_bdevs_list": [ 00:14:06.361 { 00:14:06.361 "name": null, 00:14:06.361 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:06.361 "is_configured": false, 00:14:06.361 "data_offset": 0, 00:14:06.361 "data_size": 65536 00:14:06.361 }, 00:14:06.361 { 00:14:06.361 "name": "BaseBdev2", 00:14:06.361 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:06.361 "is_configured": true, 00:14:06.361 "data_offset": 0, 00:14:06.361 "data_size": 65536 00:14:06.362 }, 00:14:06.362 { 00:14:06.362 "name": "BaseBdev3", 00:14:06.362 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:06.362 "is_configured": true, 00:14:06.362 "data_offset": 0, 00:14:06.362 "data_size": 65536 00:14:06.362 }, 00:14:06.362 { 00:14:06.362 "name": "BaseBdev4", 00:14:06.362 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:06.362 "is_configured": true, 00:14:06.362 "data_offset": 0, 00:14:06.362 "data_size": 65536 00:14:06.362 } 00:14:06.362 ] 00:14:06.362 }' 00:14:06.362 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.362 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3f88a879-49e5-4a09-a0e7-491ecdc172d3 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.928 [2024-11-27 08:46:03.613430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:06.928 [2024-11-27 08:46:03.613512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:06.928 [2024-11-27 08:46:03.613525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:06.928 [2024-11-27 08:46:03.613876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:14:06.928 [2024-11-27 08:46:03.614087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:06.928 [2024-11-27 08:46:03.614109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:06.928 [2024-11-27 08:46:03.614488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.928 NewBaseBdev 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.928 08:46:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.928 [ 00:14:06.928 { 00:14:06.928 "name": "NewBaseBdev", 00:14:06.928 "aliases": [ 00:14:06.928 "3f88a879-49e5-4a09-a0e7-491ecdc172d3" 00:14:06.928 ], 00:14:06.928 "product_name": "Malloc disk", 00:14:06.928 "block_size": 512, 00:14:06.928 "num_blocks": 65536, 00:14:06.928 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:06.928 "assigned_rate_limits": { 00:14:06.928 "rw_ios_per_sec": 0, 00:14:06.928 "rw_mbytes_per_sec": 0, 00:14:06.928 "r_mbytes_per_sec": 0, 00:14:06.928 "w_mbytes_per_sec": 0 00:14:06.928 }, 00:14:06.928 "claimed": true, 00:14:06.928 "claim_type": "exclusive_write", 00:14:06.928 "zoned": false, 00:14:06.928 "supported_io_types": { 00:14:06.928 "read": true, 00:14:06.928 "write": true, 00:14:06.928 "unmap": true, 00:14:06.928 "flush": true, 00:14:06.928 "reset": true, 00:14:06.928 "nvme_admin": false, 00:14:06.928 "nvme_io": false, 00:14:06.928 "nvme_io_md": false, 00:14:06.928 "write_zeroes": true, 00:14:06.928 "zcopy": true, 00:14:06.928 "get_zone_info": false, 00:14:06.928 "zone_management": false, 00:14:06.928 "zone_append": false, 00:14:06.928 "compare": false, 00:14:06.928 "compare_and_write": false, 00:14:06.928 "abort": true, 00:14:06.928 "seek_hole": false, 00:14:06.928 "seek_data": false, 00:14:06.928 "copy": true, 00:14:06.928 "nvme_iov_md": false 00:14:06.928 }, 00:14:06.928 "memory_domains": [ 00:14:06.928 { 00:14:06.928 "dma_device_id": "system", 00:14:06.928 "dma_device_type": 1 00:14:06.928 }, 00:14:06.928 { 00:14:06.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.928 "dma_device_type": 2 00:14:06.929 } 00:14:06.929 ], 00:14:06.929 "driver_specific": {} 00:14:06.929 } 00:14:06.929 ] 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.929 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.186 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.186 "name": "Existed_Raid", 00:14:07.186 "uuid": "0721e0bf-47d3-4ce4-8786-cd5350c1f309", 00:14:07.186 "strip_size_kb": 64, 00:14:07.186 "state": "online", 00:14:07.186 "raid_level": "concat", 00:14:07.186 "superblock": false, 00:14:07.186 
"num_base_bdevs": 4, 00:14:07.186 "num_base_bdevs_discovered": 4, 00:14:07.186 "num_base_bdevs_operational": 4, 00:14:07.186 "base_bdevs_list": [ 00:14:07.186 { 00:14:07.186 "name": "NewBaseBdev", 00:14:07.186 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:07.186 "is_configured": true, 00:14:07.186 "data_offset": 0, 00:14:07.186 "data_size": 65536 00:14:07.186 }, 00:14:07.186 { 00:14:07.186 "name": "BaseBdev2", 00:14:07.186 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:07.186 "is_configured": true, 00:14:07.186 "data_offset": 0, 00:14:07.186 "data_size": 65536 00:14:07.186 }, 00:14:07.186 { 00:14:07.186 "name": "BaseBdev3", 00:14:07.186 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:07.186 "is_configured": true, 00:14:07.186 "data_offset": 0, 00:14:07.186 "data_size": 65536 00:14:07.186 }, 00:14:07.186 { 00:14:07.186 "name": "BaseBdev4", 00:14:07.186 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:07.186 "is_configured": true, 00:14:07.186 "data_offset": 0, 00:14:07.186 "data_size": 65536 00:14:07.186 } 00:14:07.186 ] 00:14:07.186 }' 00:14:07.186 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.186 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:07.445 08:46:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:07.445 [2024-11-27 08:46:04.166142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.445 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:07.704 "name": "Existed_Raid", 00:14:07.704 "aliases": [ 00:14:07.704 "0721e0bf-47d3-4ce4-8786-cd5350c1f309" 00:14:07.704 ], 00:14:07.704 "product_name": "Raid Volume", 00:14:07.704 "block_size": 512, 00:14:07.704 "num_blocks": 262144, 00:14:07.704 "uuid": "0721e0bf-47d3-4ce4-8786-cd5350c1f309", 00:14:07.704 "assigned_rate_limits": { 00:14:07.704 "rw_ios_per_sec": 0, 00:14:07.704 "rw_mbytes_per_sec": 0, 00:14:07.704 "r_mbytes_per_sec": 0, 00:14:07.704 "w_mbytes_per_sec": 0 00:14:07.704 }, 00:14:07.704 "claimed": false, 00:14:07.704 "zoned": false, 00:14:07.704 "supported_io_types": { 00:14:07.704 "read": true, 00:14:07.704 "write": true, 00:14:07.704 "unmap": true, 00:14:07.704 "flush": true, 00:14:07.704 "reset": true, 00:14:07.704 "nvme_admin": false, 00:14:07.704 "nvme_io": false, 00:14:07.704 "nvme_io_md": false, 00:14:07.704 "write_zeroes": true, 00:14:07.704 "zcopy": false, 00:14:07.704 "get_zone_info": false, 00:14:07.704 "zone_management": false, 00:14:07.704 "zone_append": false, 00:14:07.704 "compare": false, 00:14:07.704 "compare_and_write": false, 00:14:07.704 "abort": false, 00:14:07.704 "seek_hole": false, 00:14:07.704 "seek_data": false, 00:14:07.704 "copy": false, 00:14:07.704 "nvme_iov_md": false 00:14:07.704 }, 
00:14:07.704 "memory_domains": [ 00:14:07.704 { 00:14:07.704 "dma_device_id": "system", 00:14:07.704 "dma_device_type": 1 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.704 "dma_device_type": 2 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "dma_device_id": "system", 00:14:07.704 "dma_device_type": 1 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.704 "dma_device_type": 2 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "dma_device_id": "system", 00:14:07.704 "dma_device_type": 1 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.704 "dma_device_type": 2 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "dma_device_id": "system", 00:14:07.704 "dma_device_type": 1 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.704 "dma_device_type": 2 00:14:07.704 } 00:14:07.704 ], 00:14:07.704 "driver_specific": { 00:14:07.704 "raid": { 00:14:07.704 "uuid": "0721e0bf-47d3-4ce4-8786-cd5350c1f309", 00:14:07.704 "strip_size_kb": 64, 00:14:07.704 "state": "online", 00:14:07.704 "raid_level": "concat", 00:14:07.704 "superblock": false, 00:14:07.704 "num_base_bdevs": 4, 00:14:07.704 "num_base_bdevs_discovered": 4, 00:14:07.704 "num_base_bdevs_operational": 4, 00:14:07.704 "base_bdevs_list": [ 00:14:07.704 { 00:14:07.704 "name": "NewBaseBdev", 00:14:07.704 "uuid": "3f88a879-49e5-4a09-a0e7-491ecdc172d3", 00:14:07.704 "is_configured": true, 00:14:07.704 "data_offset": 0, 00:14:07.704 "data_size": 65536 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "name": "BaseBdev2", 00:14:07.704 "uuid": "4f9985ee-6d5e-4a0d-98b3-3eb866fa716f", 00:14:07.704 "is_configured": true, 00:14:07.704 "data_offset": 0, 00:14:07.704 "data_size": 65536 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "name": "BaseBdev3", 00:14:07.704 "uuid": "97e2d073-7582-452f-a34b-e90f28a1cba0", 00:14:07.704 "is_configured": true, 00:14:07.704 "data_offset": 0, 
00:14:07.704 "data_size": 65536 00:14:07.704 }, 00:14:07.704 { 00:14:07.704 "name": "BaseBdev4", 00:14:07.704 "uuid": "04062e65-3245-4ad9-8cce-15c179f05eb9", 00:14:07.704 "is_configured": true, 00:14:07.704 "data_offset": 0, 00:14:07.704 "data_size": 65536 00:14:07.704 } 00:14:07.704 ] 00:14:07.704 } 00:14:07.704 } 00:14:07.704 }' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:07.704 BaseBdev2 00:14:07.704 BaseBdev3 00:14:07.704 BaseBdev4' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.704 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.705 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:07.705 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.705 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.705 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.964 [2024-11-27 08:46:04.517732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.964 [2024-11-27 08:46:04.517910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.964 [2024-11-27 08:46:04.518060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.964 [2024-11-27 08:46:04.518169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.964 [2024-11-27 08:46:04.518187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71545 00:14:07.964 08:46:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 71545 ']' 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # kill -0 71545 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 71545 00:14:07.964 killing process with pid 71545 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 71545' 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # kill 71545 00:14:07.964 [2024-11-27 08:46:04.557560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.964 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 71545 00:14:08.231 [2024-11-27 08:46:04.926746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.609 ************************************ 00:14:09.609 END TEST raid_state_function_test 00:14:09.609 ************************************ 00:14:09.609 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:09.609 00:14:09.609 real 0m12.923s 00:14:09.609 user 0m21.236s 00:14:09.609 sys 0m1.876s 00:14:09.609 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:14:09.609 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.609 08:46:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:09.609 08:46:06 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:14:09.609 08:46:06 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:14:09.609 08:46:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.610 ************************************ 00:14:09.610 START TEST raid_state_function_test_sb 00:14:09.610 ************************************ 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test concat 4 true 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:09.610 Process raid pid: 72234 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72234 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72234' 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72234 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 72234 ']' 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:09.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:09.610 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.610 [2024-11-27 08:46:06.212846] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:14:09.610 [2024-11-27 08:46:06.213053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.868 [2024-11-27 08:46:06.401054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.868 [2024-11-27 08:46:06.548408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.126 [2024-11-27 08:46:06.777174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.126 [2024-11-27 08:46:06.777235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.723 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:10.723 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:14:10.723 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:10.723 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.723 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.723 [2024-11-27 08:46:07.269152] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.723 [2024-11-27 08:46:07.269225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.723 [2024-11-27 08:46:07.269245] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.723 [2024-11-27 08:46:07.269262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.724 [2024-11-27 08:46:07.269273] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:10.724 [2024-11-27 08:46:07.269287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.724 [2024-11-27 08:46:07.269297] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:10.724 [2024-11-27 08:46:07.269311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.724 08:46:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.724 "name": "Existed_Raid", 00:14:10.724 "uuid": "a5fc491f-26cc-493c-9c05-cc3d777d48a2", 00:14:10.724 "strip_size_kb": 64, 00:14:10.724 "state": "configuring", 00:14:10.724 "raid_level": "concat", 00:14:10.724 "superblock": true, 00:14:10.724 "num_base_bdevs": 4, 00:14:10.724 "num_base_bdevs_discovered": 0, 00:14:10.724 "num_base_bdevs_operational": 4, 00:14:10.724 "base_bdevs_list": [ 00:14:10.724 { 00:14:10.724 "name": "BaseBdev1", 00:14:10.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.724 "is_configured": false, 00:14:10.724 "data_offset": 0, 00:14:10.724 "data_size": 0 00:14:10.724 }, 00:14:10.724 { 00:14:10.724 "name": "BaseBdev2", 00:14:10.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.724 "is_configured": false, 00:14:10.724 "data_offset": 0, 00:14:10.724 "data_size": 0 00:14:10.724 }, 00:14:10.724 { 00:14:10.724 "name": "BaseBdev3", 00:14:10.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.724 "is_configured": false, 00:14:10.724 "data_offset": 0, 00:14:10.724 "data_size": 0 00:14:10.724 }, 00:14:10.724 { 00:14:10.724 "name": "BaseBdev4", 00:14:10.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.724 "is_configured": false, 00:14:10.724 "data_offset": 0, 00:14:10.724 "data_size": 0 00:14:10.724 } 00:14:10.724 ] 00:14:10.724 }' 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.724 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.293 08:46:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.293 [2024-11-27 08:46:07.777213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.293 [2024-11-27 08:46:07.777427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.293 [2024-11-27 08:46:07.789193] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:11.293 [2024-11-27 08:46:07.789261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:11.293 [2024-11-27 08:46:07.789293] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.293 [2024-11-27 08:46:07.789309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.293 [2024-11-27 08:46:07.789326] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:11.293 [2024-11-27 08:46:07.789356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:11.293 [2024-11-27 08:46:07.789381] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:11.293 [2024-11-27 08:46:07.789413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.293 [2024-11-27 08:46:07.838200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.293 BaseBdev1 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.293 [ 00:14:11.293 { 00:14:11.293 "name": "BaseBdev1", 00:14:11.293 "aliases": [ 00:14:11.293 "d1ee4373-e303-450e-a646-f07da059d071" 00:14:11.293 ], 00:14:11.293 "product_name": "Malloc disk", 00:14:11.293 "block_size": 512, 00:14:11.293 "num_blocks": 65536, 00:14:11.293 "uuid": "d1ee4373-e303-450e-a646-f07da059d071", 00:14:11.293 "assigned_rate_limits": { 00:14:11.293 "rw_ios_per_sec": 0, 00:14:11.293 "rw_mbytes_per_sec": 0, 00:14:11.293 "r_mbytes_per_sec": 0, 00:14:11.293 "w_mbytes_per_sec": 0 00:14:11.293 }, 00:14:11.293 "claimed": true, 00:14:11.293 "claim_type": "exclusive_write", 00:14:11.293 "zoned": false, 00:14:11.293 "supported_io_types": { 00:14:11.293 "read": true, 00:14:11.293 "write": true, 00:14:11.293 "unmap": true, 00:14:11.293 "flush": true, 00:14:11.293 "reset": true, 00:14:11.293 "nvme_admin": false, 00:14:11.293 "nvme_io": false, 00:14:11.293 "nvme_io_md": false, 00:14:11.293 "write_zeroes": true, 00:14:11.293 "zcopy": true, 00:14:11.293 "get_zone_info": false, 00:14:11.293 "zone_management": false, 00:14:11.293 "zone_append": false, 00:14:11.293 "compare": false, 00:14:11.293 "compare_and_write": false, 00:14:11.293 "abort": true, 00:14:11.293 "seek_hole": false, 00:14:11.293 "seek_data": false, 00:14:11.293 "copy": true, 00:14:11.293 "nvme_iov_md": false 00:14:11.293 }, 00:14:11.293 "memory_domains": [ 00:14:11.293 { 00:14:11.293 "dma_device_id": "system", 00:14:11.293 "dma_device_type": 1 00:14:11.293 }, 00:14:11.293 { 00:14:11.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.293 "dma_device_type": 2 00:14:11.293 } 
00:14:11.293 ], 00:14:11.293 "driver_specific": {} 00:14:11.293 } 00:14:11.293 ] 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.293 08:46:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.293 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.293 "name": "Existed_Raid", 00:14:11.293 "uuid": "c3d4f645-2d71-4969-ac80-d921dd012dc7", 00:14:11.293 "strip_size_kb": 64, 00:14:11.293 "state": "configuring", 00:14:11.293 "raid_level": "concat", 00:14:11.293 "superblock": true, 00:14:11.293 "num_base_bdevs": 4, 00:14:11.293 "num_base_bdevs_discovered": 1, 00:14:11.293 "num_base_bdevs_operational": 4, 00:14:11.293 "base_bdevs_list": [ 00:14:11.293 { 00:14:11.293 "name": "BaseBdev1", 00:14:11.293 "uuid": "d1ee4373-e303-450e-a646-f07da059d071", 00:14:11.293 "is_configured": true, 00:14:11.293 "data_offset": 2048, 00:14:11.293 "data_size": 63488 00:14:11.293 }, 00:14:11.293 { 00:14:11.293 "name": "BaseBdev2", 00:14:11.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.294 "is_configured": false, 00:14:11.294 "data_offset": 0, 00:14:11.294 "data_size": 0 00:14:11.294 }, 00:14:11.294 { 00:14:11.294 "name": "BaseBdev3", 00:14:11.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.294 "is_configured": false, 00:14:11.294 "data_offset": 0, 00:14:11.294 "data_size": 0 00:14:11.294 }, 00:14:11.294 { 00:14:11.294 "name": "BaseBdev4", 00:14:11.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.294 "is_configured": false, 00:14:11.294 "data_offset": 0, 00:14:11.294 "data_size": 0 00:14:11.294 } 00:14:11.294 ] 00:14:11.294 }' 00:14:11.294 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.294 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.862 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:11.862 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.862 08:46:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.862 [2024-11-27 08:46:08.394450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.862 [2024-11-27 08:46:08.394663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:11.862 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.862 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:11.862 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.862 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.862 [2024-11-27 08:46:08.406506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.862 [2024-11-27 08:46:08.409274] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.862 [2024-11-27 08:46:08.409463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.863 [2024-11-27 08:46:08.409584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:11.863 [2024-11-27 08:46:08.409646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:11.863 [2024-11-27 08:46:08.409789] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:11.863 [2024-11-27 08:46:08.409853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:11.863 "name": "Existed_Raid", 00:14:11.863 "uuid": "25838b29-5786-497e-9cb3-466b8bf2f56c", 00:14:11.863 "strip_size_kb": 64, 00:14:11.863 "state": "configuring", 00:14:11.863 "raid_level": "concat", 00:14:11.863 "superblock": true, 00:14:11.863 "num_base_bdevs": 4, 00:14:11.863 "num_base_bdevs_discovered": 1, 00:14:11.863 "num_base_bdevs_operational": 4, 00:14:11.863 "base_bdevs_list": [ 00:14:11.863 { 00:14:11.863 "name": "BaseBdev1", 00:14:11.863 "uuid": "d1ee4373-e303-450e-a646-f07da059d071", 00:14:11.863 "is_configured": true, 00:14:11.863 "data_offset": 2048, 00:14:11.863 "data_size": 63488 00:14:11.863 }, 00:14:11.863 { 00:14:11.863 "name": "BaseBdev2", 00:14:11.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.863 "is_configured": false, 00:14:11.863 "data_offset": 0, 00:14:11.863 "data_size": 0 00:14:11.863 }, 00:14:11.863 { 00:14:11.863 "name": "BaseBdev3", 00:14:11.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.863 "is_configured": false, 00:14:11.863 "data_offset": 0, 00:14:11.863 "data_size": 0 00:14:11.863 }, 00:14:11.863 { 00:14:11.863 "name": "BaseBdev4", 00:14:11.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.863 "is_configured": false, 00:14:11.863 "data_offset": 0, 00:14:11.863 "data_size": 0 00:14:11.863 } 00:14:11.863 ] 00:14:11.863 }' 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.863 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.430 [2024-11-27 08:46:08.973071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:12.430 BaseBdev2 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.430 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.430 [ 00:14:12.430 { 00:14:12.430 "name": "BaseBdev2", 00:14:12.430 "aliases": [ 00:14:12.430 "327dfac1-878a-4af7-86aa-ff3a391c53fd" 00:14:12.430 ], 00:14:12.430 "product_name": "Malloc disk", 00:14:12.430 "block_size": 512, 00:14:12.430 "num_blocks": 65536, 00:14:12.430 "uuid": "327dfac1-878a-4af7-86aa-ff3a391c53fd", 
00:14:12.430 "assigned_rate_limits": { 00:14:12.430 "rw_ios_per_sec": 0, 00:14:12.430 "rw_mbytes_per_sec": 0, 00:14:12.430 "r_mbytes_per_sec": 0, 00:14:12.430 "w_mbytes_per_sec": 0 00:14:12.430 }, 00:14:12.430 "claimed": true, 00:14:12.430 "claim_type": "exclusive_write", 00:14:12.430 "zoned": false, 00:14:12.430 "supported_io_types": { 00:14:12.430 "read": true, 00:14:12.430 "write": true, 00:14:12.430 "unmap": true, 00:14:12.430 "flush": true, 00:14:12.430 "reset": true, 00:14:12.430 "nvme_admin": false, 00:14:12.430 "nvme_io": false, 00:14:12.430 "nvme_io_md": false, 00:14:12.430 "write_zeroes": true, 00:14:12.430 "zcopy": true, 00:14:12.430 "get_zone_info": false, 00:14:12.430 "zone_management": false, 00:14:12.430 "zone_append": false, 00:14:12.430 "compare": false, 00:14:12.430 "compare_and_write": false, 00:14:12.430 "abort": true, 00:14:12.430 "seek_hole": false, 00:14:12.430 "seek_data": false, 00:14:12.430 "copy": true, 00:14:12.430 "nvme_iov_md": false 00:14:12.430 }, 00:14:12.430 "memory_domains": [ 00:14:12.430 { 00:14:12.430 "dma_device_id": "system", 00:14:12.430 "dma_device_type": 1 00:14:12.430 }, 00:14:12.430 { 00:14:12.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.430 "dma_device_type": 2 00:14:12.430 } 00:14:12.430 ], 00:14:12.430 "driver_specific": {} 00:14:12.430 } 00:14:12.430 ] 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.430 "name": "Existed_Raid", 00:14:12.430 "uuid": "25838b29-5786-497e-9cb3-466b8bf2f56c", 00:14:12.430 "strip_size_kb": 64, 00:14:12.430 "state": "configuring", 00:14:12.430 "raid_level": "concat", 00:14:12.430 "superblock": true, 00:14:12.430 "num_base_bdevs": 4, 00:14:12.430 "num_base_bdevs_discovered": 2, 00:14:12.430 
"num_base_bdevs_operational": 4, 00:14:12.430 "base_bdevs_list": [ 00:14:12.430 { 00:14:12.430 "name": "BaseBdev1", 00:14:12.430 "uuid": "d1ee4373-e303-450e-a646-f07da059d071", 00:14:12.430 "is_configured": true, 00:14:12.430 "data_offset": 2048, 00:14:12.430 "data_size": 63488 00:14:12.430 }, 00:14:12.430 { 00:14:12.430 "name": "BaseBdev2", 00:14:12.430 "uuid": "327dfac1-878a-4af7-86aa-ff3a391c53fd", 00:14:12.430 "is_configured": true, 00:14:12.430 "data_offset": 2048, 00:14:12.430 "data_size": 63488 00:14:12.430 }, 00:14:12.430 { 00:14:12.430 "name": "BaseBdev3", 00:14:12.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.430 "is_configured": false, 00:14:12.430 "data_offset": 0, 00:14:12.430 "data_size": 0 00:14:12.430 }, 00:14:12.430 { 00:14:12.430 "name": "BaseBdev4", 00:14:12.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.430 "is_configured": false, 00:14:12.430 "data_offset": 0, 00:14:12.430 "data_size": 0 00:14:12.430 } 00:14:12.430 ] 00:14:12.430 }' 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.430 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.009 [2024-11-27 08:46:09.552801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.009 BaseBdev3 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.009 [ 00:14:13.009 { 00:14:13.009 "name": "BaseBdev3", 00:14:13.009 "aliases": [ 00:14:13.009 "7867181f-b92a-4efb-9f05-140a94d0326e" 00:14:13.009 ], 00:14:13.009 "product_name": "Malloc disk", 00:14:13.009 "block_size": 512, 00:14:13.009 "num_blocks": 65536, 00:14:13.009 "uuid": "7867181f-b92a-4efb-9f05-140a94d0326e", 00:14:13.009 "assigned_rate_limits": { 00:14:13.009 "rw_ios_per_sec": 0, 00:14:13.009 "rw_mbytes_per_sec": 0, 00:14:13.009 "r_mbytes_per_sec": 0, 00:14:13.009 "w_mbytes_per_sec": 0 00:14:13.009 }, 00:14:13.009 "claimed": true, 00:14:13.009 "claim_type": "exclusive_write", 00:14:13.009 "zoned": false, 00:14:13.009 "supported_io_types": { 
00:14:13.009 "read": true, 00:14:13.009 "write": true, 00:14:13.009 "unmap": true, 00:14:13.009 "flush": true, 00:14:13.009 "reset": true, 00:14:13.009 "nvme_admin": false, 00:14:13.009 "nvme_io": false, 00:14:13.009 "nvme_io_md": false, 00:14:13.009 "write_zeroes": true, 00:14:13.009 "zcopy": true, 00:14:13.009 "get_zone_info": false, 00:14:13.009 "zone_management": false, 00:14:13.009 "zone_append": false, 00:14:13.009 "compare": false, 00:14:13.009 "compare_and_write": false, 00:14:13.009 "abort": true, 00:14:13.009 "seek_hole": false, 00:14:13.009 "seek_data": false, 00:14:13.009 "copy": true, 00:14:13.009 "nvme_iov_md": false 00:14:13.009 }, 00:14:13.009 "memory_domains": [ 00:14:13.009 { 00:14:13.009 "dma_device_id": "system", 00:14:13.009 "dma_device_type": 1 00:14:13.009 }, 00:14:13.009 { 00:14:13.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.009 "dma_device_type": 2 00:14:13.009 } 00:14:13.009 ], 00:14:13.009 "driver_specific": {} 00:14:13.009 } 00:14:13.009 ] 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.009 "name": "Existed_Raid", 00:14:13.009 "uuid": "25838b29-5786-497e-9cb3-466b8bf2f56c", 00:14:13.009 "strip_size_kb": 64, 00:14:13.009 "state": "configuring", 00:14:13.009 "raid_level": "concat", 00:14:13.009 "superblock": true, 00:14:13.009 "num_base_bdevs": 4, 00:14:13.009 "num_base_bdevs_discovered": 3, 00:14:13.009 "num_base_bdevs_operational": 4, 00:14:13.009 "base_bdevs_list": [ 00:14:13.009 { 00:14:13.009 "name": "BaseBdev1", 00:14:13.009 "uuid": "d1ee4373-e303-450e-a646-f07da059d071", 00:14:13.009 "is_configured": true, 00:14:13.009 "data_offset": 2048, 00:14:13.009 "data_size": 63488 00:14:13.009 }, 00:14:13.009 { 00:14:13.009 "name": "BaseBdev2", 00:14:13.009 
"uuid": "327dfac1-878a-4af7-86aa-ff3a391c53fd", 00:14:13.009 "is_configured": true, 00:14:13.009 "data_offset": 2048, 00:14:13.009 "data_size": 63488 00:14:13.009 }, 00:14:13.009 { 00:14:13.009 "name": "BaseBdev3", 00:14:13.009 "uuid": "7867181f-b92a-4efb-9f05-140a94d0326e", 00:14:13.009 "is_configured": true, 00:14:13.009 "data_offset": 2048, 00:14:13.009 "data_size": 63488 00:14:13.009 }, 00:14:13.009 { 00:14:13.009 "name": "BaseBdev4", 00:14:13.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.009 "is_configured": false, 00:14:13.009 "data_offset": 0, 00:14:13.009 "data_size": 0 00:14:13.009 } 00:14:13.009 ] 00:14:13.009 }' 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.009 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.576 [2024-11-27 08:46:10.122779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.576 [2024-11-27 08:46:10.123309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:13.576 [2024-11-27 08:46:10.123357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:13.576 BaseBdev4 00:14:13.576 [2024-11-27 08:46:10.123713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:13.576 [2024-11-27 08:46:10.123918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:13.576 [2024-11-27 08:46:10.123947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:13.576 [2024-11-27 08:46:10.124134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.576 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.576 [ 00:14:13.576 { 00:14:13.576 "name": "BaseBdev4", 00:14:13.576 "aliases": [ 00:14:13.576 "096e0904-c340-4178-b623-0bc0ee1e40e7" 00:14:13.576 ], 00:14:13.576 "product_name": "Malloc disk", 00:14:13.576 "block_size": 512, 00:14:13.576 
"num_blocks": 65536, 00:14:13.577 "uuid": "096e0904-c340-4178-b623-0bc0ee1e40e7", 00:14:13.577 "assigned_rate_limits": { 00:14:13.577 "rw_ios_per_sec": 0, 00:14:13.577 "rw_mbytes_per_sec": 0, 00:14:13.577 "r_mbytes_per_sec": 0, 00:14:13.577 "w_mbytes_per_sec": 0 00:14:13.577 }, 00:14:13.577 "claimed": true, 00:14:13.577 "claim_type": "exclusive_write", 00:14:13.577 "zoned": false, 00:14:13.577 "supported_io_types": { 00:14:13.577 "read": true, 00:14:13.577 "write": true, 00:14:13.577 "unmap": true, 00:14:13.577 "flush": true, 00:14:13.577 "reset": true, 00:14:13.577 "nvme_admin": false, 00:14:13.577 "nvme_io": false, 00:14:13.577 "nvme_io_md": false, 00:14:13.577 "write_zeroes": true, 00:14:13.577 "zcopy": true, 00:14:13.577 "get_zone_info": false, 00:14:13.577 "zone_management": false, 00:14:13.577 "zone_append": false, 00:14:13.577 "compare": false, 00:14:13.577 "compare_and_write": false, 00:14:13.577 "abort": true, 00:14:13.577 "seek_hole": false, 00:14:13.577 "seek_data": false, 00:14:13.577 "copy": true, 00:14:13.577 "nvme_iov_md": false 00:14:13.577 }, 00:14:13.577 "memory_domains": [ 00:14:13.577 { 00:14:13.577 "dma_device_id": "system", 00:14:13.577 "dma_device_type": 1 00:14:13.577 }, 00:14:13.577 { 00:14:13.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.577 "dma_device_type": 2 00:14:13.577 } 00:14:13.577 ], 00:14:13.577 "driver_specific": {} 00:14:13.577 } 00:14:13.577 ] 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.577 "name": "Existed_Raid", 00:14:13.577 "uuid": "25838b29-5786-497e-9cb3-466b8bf2f56c", 00:14:13.577 "strip_size_kb": 64, 00:14:13.577 "state": "online", 00:14:13.577 "raid_level": "concat", 00:14:13.577 "superblock": true, 00:14:13.577 "num_base_bdevs": 4, 
00:14:13.577 "num_base_bdevs_discovered": 4, 00:14:13.577 "num_base_bdevs_operational": 4, 00:14:13.577 "base_bdevs_list": [ 00:14:13.577 { 00:14:13.577 "name": "BaseBdev1", 00:14:13.577 "uuid": "d1ee4373-e303-450e-a646-f07da059d071", 00:14:13.577 "is_configured": true, 00:14:13.577 "data_offset": 2048, 00:14:13.577 "data_size": 63488 00:14:13.577 }, 00:14:13.577 { 00:14:13.577 "name": "BaseBdev2", 00:14:13.577 "uuid": "327dfac1-878a-4af7-86aa-ff3a391c53fd", 00:14:13.577 "is_configured": true, 00:14:13.577 "data_offset": 2048, 00:14:13.577 "data_size": 63488 00:14:13.577 }, 00:14:13.577 { 00:14:13.577 "name": "BaseBdev3", 00:14:13.577 "uuid": "7867181f-b92a-4efb-9f05-140a94d0326e", 00:14:13.577 "is_configured": true, 00:14:13.577 "data_offset": 2048, 00:14:13.577 "data_size": 63488 00:14:13.577 }, 00:14:13.577 { 00:14:13.577 "name": "BaseBdev4", 00:14:13.577 "uuid": "096e0904-c340-4178-b623-0bc0ee1e40e7", 00:14:13.577 "is_configured": true, 00:14:13.577 "data_offset": 2048, 00:14:13.577 "data_size": 63488 00:14:13.577 } 00:14:13.577 ] 00:14:13.577 }' 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.577 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:14.144 
08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:14.144 [2024-11-27 08:46:10.727524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.144 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.144 "name": "Existed_Raid", 00:14:14.144 "aliases": [ 00:14:14.144 "25838b29-5786-497e-9cb3-466b8bf2f56c" 00:14:14.144 ], 00:14:14.144 "product_name": "Raid Volume", 00:14:14.144 "block_size": 512, 00:14:14.144 "num_blocks": 253952, 00:14:14.144 "uuid": "25838b29-5786-497e-9cb3-466b8bf2f56c", 00:14:14.144 "assigned_rate_limits": { 00:14:14.144 "rw_ios_per_sec": 0, 00:14:14.144 "rw_mbytes_per_sec": 0, 00:14:14.144 "r_mbytes_per_sec": 0, 00:14:14.144 "w_mbytes_per_sec": 0 00:14:14.144 }, 00:14:14.144 "claimed": false, 00:14:14.144 "zoned": false, 00:14:14.144 "supported_io_types": { 00:14:14.144 "read": true, 00:14:14.144 "write": true, 00:14:14.144 "unmap": true, 00:14:14.144 "flush": true, 00:14:14.144 "reset": true, 00:14:14.144 "nvme_admin": false, 00:14:14.144 "nvme_io": false, 00:14:14.144 "nvme_io_md": false, 00:14:14.144 "write_zeroes": true, 00:14:14.144 "zcopy": false, 00:14:14.144 "get_zone_info": false, 00:14:14.144 "zone_management": false, 00:14:14.144 "zone_append": false, 00:14:14.144 "compare": false, 00:14:14.144 "compare_and_write": false, 00:14:14.144 "abort": false, 00:14:14.144 "seek_hole": false, 00:14:14.144 "seek_data": false, 00:14:14.144 "copy": false, 00:14:14.144 
"nvme_iov_md": false 00:14:14.144 }, 00:14:14.144 "memory_domains": [ 00:14:14.144 { 00:14:14.144 "dma_device_id": "system", 00:14:14.144 "dma_device_type": 1 00:14:14.144 }, 00:14:14.144 { 00:14:14.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.144 "dma_device_type": 2 00:14:14.144 }, 00:14:14.144 { 00:14:14.144 "dma_device_id": "system", 00:14:14.144 "dma_device_type": 1 00:14:14.144 }, 00:14:14.144 { 00:14:14.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.144 "dma_device_type": 2 00:14:14.144 }, 00:14:14.144 { 00:14:14.144 "dma_device_id": "system", 00:14:14.144 "dma_device_type": 1 00:14:14.144 }, 00:14:14.144 { 00:14:14.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.144 "dma_device_type": 2 00:14:14.144 }, 00:14:14.144 { 00:14:14.144 "dma_device_id": "system", 00:14:14.144 "dma_device_type": 1 00:14:14.144 }, 00:14:14.144 { 00:14:14.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.144 "dma_device_type": 2 00:14:14.144 } 00:14:14.144 ], 00:14:14.144 "driver_specific": { 00:14:14.144 "raid": { 00:14:14.144 "uuid": "25838b29-5786-497e-9cb3-466b8bf2f56c", 00:14:14.144 "strip_size_kb": 64, 00:14:14.144 "state": "online", 00:14:14.144 "raid_level": "concat", 00:14:14.144 "superblock": true, 00:14:14.145 "num_base_bdevs": 4, 00:14:14.145 "num_base_bdevs_discovered": 4, 00:14:14.145 "num_base_bdevs_operational": 4, 00:14:14.145 "base_bdevs_list": [ 00:14:14.145 { 00:14:14.145 "name": "BaseBdev1", 00:14:14.145 "uuid": "d1ee4373-e303-450e-a646-f07da059d071", 00:14:14.145 "is_configured": true, 00:14:14.145 "data_offset": 2048, 00:14:14.145 "data_size": 63488 00:14:14.145 }, 00:14:14.145 { 00:14:14.145 "name": "BaseBdev2", 00:14:14.145 "uuid": "327dfac1-878a-4af7-86aa-ff3a391c53fd", 00:14:14.145 "is_configured": true, 00:14:14.145 "data_offset": 2048, 00:14:14.145 "data_size": 63488 00:14:14.145 }, 00:14:14.145 { 00:14:14.145 "name": "BaseBdev3", 00:14:14.145 "uuid": "7867181f-b92a-4efb-9f05-140a94d0326e", 00:14:14.145 "is_configured": true, 
00:14:14.145 "data_offset": 2048, 00:14:14.145 "data_size": 63488 00:14:14.145 }, 00:14:14.145 { 00:14:14.145 "name": "BaseBdev4", 00:14:14.145 "uuid": "096e0904-c340-4178-b623-0bc0ee1e40e7", 00:14:14.145 "is_configured": true, 00:14:14.145 "data_offset": 2048, 00:14:14.145 "data_size": 63488 00:14:14.145 } 00:14:14.145 ] 00:14:14.145 } 00:14:14.145 } 00:14:14.145 }' 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:14.145 BaseBdev2 00:14:14.145 BaseBdev3 00:14:14.145 BaseBdev4' 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.145 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.403 08:46:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.403 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.403 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.403 [2024-11-27 08:46:11.095228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.403 [2024-11-27 08:46:11.095422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.403 [2024-11-27 08:46:11.095526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:14.661 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.661 "name": "Existed_Raid", 00:14:14.661 "uuid": "25838b29-5786-497e-9cb3-466b8bf2f56c", 00:14:14.661 "strip_size_kb": 64, 00:14:14.661 "state": "offline", 00:14:14.661 "raid_level": "concat", 00:14:14.661 "superblock": true, 00:14:14.661 "num_base_bdevs": 4, 00:14:14.661 "num_base_bdevs_discovered": 3, 00:14:14.661 "num_base_bdevs_operational": 3, 00:14:14.661 "base_bdevs_list": [ 00:14:14.661 { 00:14:14.661 "name": null, 00:14:14.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.661 "is_configured": false, 00:14:14.661 "data_offset": 0, 00:14:14.661 "data_size": 63488 00:14:14.661 }, 00:14:14.661 { 00:14:14.661 "name": "BaseBdev2", 00:14:14.661 "uuid": "327dfac1-878a-4af7-86aa-ff3a391c53fd", 00:14:14.661 "is_configured": true, 00:14:14.661 "data_offset": 2048, 00:14:14.661 "data_size": 63488 00:14:14.661 }, 00:14:14.661 { 00:14:14.661 "name": "BaseBdev3", 00:14:14.662 "uuid": "7867181f-b92a-4efb-9f05-140a94d0326e", 00:14:14.662 "is_configured": true, 00:14:14.662 "data_offset": 2048, 00:14:14.662 "data_size": 63488 00:14:14.662 }, 00:14:14.662 { 00:14:14.662 "name": "BaseBdev4", 00:14:14.662 "uuid": "096e0904-c340-4178-b623-0bc0ee1e40e7", 00:14:14.662 "is_configured": true, 00:14:14.662 "data_offset": 2048, 00:14:14.662 "data_size": 63488 00:14:14.662 } 00:14:14.662 ] 00:14:14.662 }' 00:14:14.662 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.662 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.235 
08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.235 [2024-11-27 08:46:11.770169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.235 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.235 [2024-11-27 08:46:11.937410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:15.495 08:46:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.495 [2024-11-27 08:46:12.089459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:15.495 [2024-11-27 08:46:12.089541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.495 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.754 BaseBdev2 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.754 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.754 [ 00:14:15.754 { 00:14:15.754 "name": "BaseBdev2", 00:14:15.754 "aliases": [ 00:14:15.754 
"0fe7bbf6-6629-47a9-898e-69371fa11e8b" 00:14:15.754 ], 00:14:15.754 "product_name": "Malloc disk", 00:14:15.754 "block_size": 512, 00:14:15.754 "num_blocks": 65536, 00:14:15.754 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:15.754 "assigned_rate_limits": { 00:14:15.754 "rw_ios_per_sec": 0, 00:14:15.754 "rw_mbytes_per_sec": 0, 00:14:15.754 "r_mbytes_per_sec": 0, 00:14:15.754 "w_mbytes_per_sec": 0 00:14:15.754 }, 00:14:15.754 "claimed": false, 00:14:15.754 "zoned": false, 00:14:15.754 "supported_io_types": { 00:14:15.754 "read": true, 00:14:15.754 "write": true, 00:14:15.754 "unmap": true, 00:14:15.754 "flush": true, 00:14:15.754 "reset": true, 00:14:15.754 "nvme_admin": false, 00:14:15.754 "nvme_io": false, 00:14:15.754 "nvme_io_md": false, 00:14:15.754 "write_zeroes": true, 00:14:15.754 "zcopy": true, 00:14:15.754 "get_zone_info": false, 00:14:15.754 "zone_management": false, 00:14:15.754 "zone_append": false, 00:14:15.754 "compare": false, 00:14:15.754 "compare_and_write": false, 00:14:15.754 "abort": true, 00:14:15.754 "seek_hole": false, 00:14:15.754 "seek_data": false, 00:14:15.754 "copy": true, 00:14:15.754 "nvme_iov_md": false 00:14:15.754 }, 00:14:15.754 "memory_domains": [ 00:14:15.754 { 00:14:15.754 "dma_device_id": "system", 00:14:15.754 "dma_device_type": 1 00:14:15.754 }, 00:14:15.754 { 00:14:15.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.754 "dma_device_type": 2 00:14:15.754 } 00:14:15.754 ], 00:14:15.754 "driver_specific": {} 00:14:15.754 } 00:14:15.754 ] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.755 08:46:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.755 BaseBdev3 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.755 [ 00:14:15.755 { 
00:14:15.755 "name": "BaseBdev3", 00:14:15.755 "aliases": [ 00:14:15.755 "24d48198-513a-4e5b-87e4-55e0439fd096" 00:14:15.755 ], 00:14:15.755 "product_name": "Malloc disk", 00:14:15.755 "block_size": 512, 00:14:15.755 "num_blocks": 65536, 00:14:15.755 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:15.755 "assigned_rate_limits": { 00:14:15.755 "rw_ios_per_sec": 0, 00:14:15.755 "rw_mbytes_per_sec": 0, 00:14:15.755 "r_mbytes_per_sec": 0, 00:14:15.755 "w_mbytes_per_sec": 0 00:14:15.755 }, 00:14:15.755 "claimed": false, 00:14:15.755 "zoned": false, 00:14:15.755 "supported_io_types": { 00:14:15.755 "read": true, 00:14:15.755 "write": true, 00:14:15.755 "unmap": true, 00:14:15.755 "flush": true, 00:14:15.755 "reset": true, 00:14:15.755 "nvme_admin": false, 00:14:15.755 "nvme_io": false, 00:14:15.755 "nvme_io_md": false, 00:14:15.755 "write_zeroes": true, 00:14:15.755 "zcopy": true, 00:14:15.755 "get_zone_info": false, 00:14:15.755 "zone_management": false, 00:14:15.755 "zone_append": false, 00:14:15.755 "compare": false, 00:14:15.755 "compare_and_write": false, 00:14:15.755 "abort": true, 00:14:15.755 "seek_hole": false, 00:14:15.755 "seek_data": false, 00:14:15.755 "copy": true, 00:14:15.755 "nvme_iov_md": false 00:14:15.755 }, 00:14:15.755 "memory_domains": [ 00:14:15.755 { 00:14:15.755 "dma_device_id": "system", 00:14:15.755 "dma_device_type": 1 00:14:15.755 }, 00:14:15.755 { 00:14:15.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.755 "dma_device_type": 2 00:14:15.755 } 00:14:15.755 ], 00:14:15.755 "driver_specific": {} 00:14:15.755 } 00:14:15.755 ] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.755 BaseBdev4 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:15.755 [ 00:14:15.755 { 00:14:15.755 "name": "BaseBdev4", 00:14:15.755 "aliases": [ 00:14:15.755 "84b3af5b-d930-4ded-aeb4-5ded732663df" 00:14:15.755 ], 00:14:15.755 "product_name": "Malloc disk", 00:14:15.755 "block_size": 512, 00:14:15.755 "num_blocks": 65536, 00:14:15.755 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:15.755 "assigned_rate_limits": { 00:14:15.755 "rw_ios_per_sec": 0, 00:14:15.755 "rw_mbytes_per_sec": 0, 00:14:15.755 "r_mbytes_per_sec": 0, 00:14:15.755 "w_mbytes_per_sec": 0 00:14:15.755 }, 00:14:15.755 "claimed": false, 00:14:15.755 "zoned": false, 00:14:15.755 "supported_io_types": { 00:14:15.755 "read": true, 00:14:15.755 "write": true, 00:14:15.755 "unmap": true, 00:14:15.755 "flush": true, 00:14:15.755 "reset": true, 00:14:15.755 "nvme_admin": false, 00:14:15.755 "nvme_io": false, 00:14:15.755 "nvme_io_md": false, 00:14:15.755 "write_zeroes": true, 00:14:15.755 "zcopy": true, 00:14:15.755 "get_zone_info": false, 00:14:15.755 "zone_management": false, 00:14:15.755 "zone_append": false, 00:14:15.755 "compare": false, 00:14:15.755 "compare_and_write": false, 00:14:15.755 "abort": true, 00:14:15.755 "seek_hole": false, 00:14:15.755 "seek_data": false, 00:14:15.755 "copy": true, 00:14:15.755 "nvme_iov_md": false 00:14:15.755 }, 00:14:15.755 "memory_domains": [ 00:14:15.755 { 00:14:15.755 "dma_device_id": "system", 00:14:15.755 "dma_device_type": 1 00:14:15.755 }, 00:14:15.755 { 00:14:15.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.755 "dma_device_type": 2 00:14:15.755 } 00:14:15.755 ], 00:14:15.755 "driver_specific": {} 00:14:15.755 } 00:14:15.755 ] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.755 08:46:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.755 [2024-11-27 08:46:12.486177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.755 [2024-11-27 08:46:12.486250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.755 [2024-11-27 08:46:12.486295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.755 [2024-11-27 08:46:12.489025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.755 [2024-11-27 08:46:12.489098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.755 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.756 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.756 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.014 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.014 "name": "Existed_Raid", 00:14:16.014 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:16.014 "strip_size_kb": 64, 00:14:16.014 "state": "configuring", 00:14:16.014 "raid_level": "concat", 00:14:16.014 "superblock": true, 00:14:16.014 "num_base_bdevs": 4, 00:14:16.014 "num_base_bdevs_discovered": 3, 00:14:16.014 "num_base_bdevs_operational": 4, 00:14:16.014 "base_bdevs_list": [ 00:14:16.014 { 00:14:16.014 "name": "BaseBdev1", 00:14:16.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.014 "is_configured": false, 00:14:16.014 "data_offset": 0, 00:14:16.014 "data_size": 0 00:14:16.014 }, 00:14:16.014 { 00:14:16.014 "name": "BaseBdev2", 00:14:16.014 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:16.014 "is_configured": true, 00:14:16.014 "data_offset": 2048, 00:14:16.014 "data_size": 63488 
00:14:16.014 }, 00:14:16.014 { 00:14:16.014 "name": "BaseBdev3", 00:14:16.014 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:16.014 "is_configured": true, 00:14:16.014 "data_offset": 2048, 00:14:16.014 "data_size": 63488 00:14:16.014 }, 00:14:16.014 { 00:14:16.014 "name": "BaseBdev4", 00:14:16.014 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:16.014 "is_configured": true, 00:14:16.014 "data_offset": 2048, 00:14:16.014 "data_size": 63488 00:14:16.014 } 00:14:16.014 ] 00:14:16.014 }' 00:14:16.014 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.014 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.272 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:16.272 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.272 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.272 [2024-11-27 08:46:13.002320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.272 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.273 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.273 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.531 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.531 "name": "Existed_Raid", 00:14:16.531 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:16.531 "strip_size_kb": 64, 00:14:16.531 "state": "configuring", 00:14:16.531 "raid_level": "concat", 00:14:16.531 "superblock": true, 00:14:16.531 "num_base_bdevs": 4, 00:14:16.531 "num_base_bdevs_discovered": 2, 00:14:16.531 "num_base_bdevs_operational": 4, 00:14:16.531 "base_bdevs_list": [ 00:14:16.531 { 00:14:16.531 "name": "BaseBdev1", 00:14:16.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.531 "is_configured": false, 00:14:16.531 "data_offset": 0, 00:14:16.531 "data_size": 0 00:14:16.531 }, 00:14:16.531 { 00:14:16.531 "name": null, 00:14:16.531 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:16.531 "is_configured": false, 00:14:16.531 "data_offset": 0, 00:14:16.531 "data_size": 63488 
00:14:16.531 }, 00:14:16.531 { 00:14:16.531 "name": "BaseBdev3", 00:14:16.531 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:16.531 "is_configured": true, 00:14:16.531 "data_offset": 2048, 00:14:16.531 "data_size": 63488 00:14:16.531 }, 00:14:16.531 { 00:14:16.531 "name": "BaseBdev4", 00:14:16.531 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:16.531 "is_configured": true, 00:14:16.531 "data_offset": 2048, 00:14:16.531 "data_size": 63488 00:14:16.531 } 00:14:16.531 ] 00:14:16.531 }' 00:14:16.531 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.531 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.790 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:16.790 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.790 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.790 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.790 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.048 [2024-11-27 08:46:13.617671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.048 BaseBdev1 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.048 [ 00:14:17.048 { 00:14:17.048 "name": "BaseBdev1", 00:14:17.048 "aliases": [ 00:14:17.048 "018028a3-c6c6-4cd6-94eb-404234b3e963" 00:14:17.048 ], 00:14:17.048 "product_name": "Malloc disk", 00:14:17.048 "block_size": 512, 00:14:17.048 "num_blocks": 65536, 00:14:17.048 "uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:17.048 "assigned_rate_limits": { 00:14:17.048 "rw_ios_per_sec": 0, 00:14:17.048 "rw_mbytes_per_sec": 0, 
00:14:17.048 "r_mbytes_per_sec": 0, 00:14:17.048 "w_mbytes_per_sec": 0 00:14:17.048 }, 00:14:17.048 "claimed": true, 00:14:17.048 "claim_type": "exclusive_write", 00:14:17.048 "zoned": false, 00:14:17.048 "supported_io_types": { 00:14:17.048 "read": true, 00:14:17.048 "write": true, 00:14:17.048 "unmap": true, 00:14:17.048 "flush": true, 00:14:17.048 "reset": true, 00:14:17.048 "nvme_admin": false, 00:14:17.048 "nvme_io": false, 00:14:17.048 "nvme_io_md": false, 00:14:17.048 "write_zeroes": true, 00:14:17.048 "zcopy": true, 00:14:17.048 "get_zone_info": false, 00:14:17.048 "zone_management": false, 00:14:17.048 "zone_append": false, 00:14:17.048 "compare": false, 00:14:17.048 "compare_and_write": false, 00:14:17.048 "abort": true, 00:14:17.048 "seek_hole": false, 00:14:17.048 "seek_data": false, 00:14:17.048 "copy": true, 00:14:17.048 "nvme_iov_md": false 00:14:17.048 }, 00:14:17.048 "memory_domains": [ 00:14:17.048 { 00:14:17.048 "dma_device_id": "system", 00:14:17.048 "dma_device_type": 1 00:14:17.048 }, 00:14:17.048 { 00:14:17.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.048 "dma_device_type": 2 00:14:17.048 } 00:14:17.048 ], 00:14:17.048 "driver_specific": {} 00:14:17.048 } 00:14:17.048 ] 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.048 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.049 08:46:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.049 "name": "Existed_Raid", 00:14:17.049 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:17.049 "strip_size_kb": 64, 00:14:17.049 "state": "configuring", 00:14:17.049 "raid_level": "concat", 00:14:17.049 "superblock": true, 00:14:17.049 "num_base_bdevs": 4, 00:14:17.049 "num_base_bdevs_discovered": 3, 00:14:17.049 "num_base_bdevs_operational": 4, 00:14:17.049 "base_bdevs_list": [ 00:14:17.049 { 00:14:17.049 "name": "BaseBdev1", 00:14:17.049 "uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:17.049 "is_configured": true, 00:14:17.049 "data_offset": 2048, 00:14:17.049 "data_size": 63488 00:14:17.049 }, 00:14:17.049 { 
00:14:17.049 "name": null, 00:14:17.049 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:17.049 "is_configured": false, 00:14:17.049 "data_offset": 0, 00:14:17.049 "data_size": 63488 00:14:17.049 }, 00:14:17.049 { 00:14:17.049 "name": "BaseBdev3", 00:14:17.049 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:17.049 "is_configured": true, 00:14:17.049 "data_offset": 2048, 00:14:17.049 "data_size": 63488 00:14:17.049 }, 00:14:17.049 { 00:14:17.049 "name": "BaseBdev4", 00:14:17.049 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:17.049 "is_configured": true, 00:14:17.049 "data_offset": 2048, 00:14:17.049 "data_size": 63488 00:14:17.049 } 00:14:17.049 ] 00:14:17.049 }' 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.049 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.615 [2024-11-27 08:46:14.213949] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:17.615 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.616 08:46:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.616 "name": "Existed_Raid", 00:14:17.616 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:17.616 "strip_size_kb": 64, 00:14:17.616 "state": "configuring", 00:14:17.616 "raid_level": "concat", 00:14:17.616 "superblock": true, 00:14:17.616 "num_base_bdevs": 4, 00:14:17.616 "num_base_bdevs_discovered": 2, 00:14:17.616 "num_base_bdevs_operational": 4, 00:14:17.616 "base_bdevs_list": [ 00:14:17.616 { 00:14:17.616 "name": "BaseBdev1", 00:14:17.616 "uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:17.616 "is_configured": true, 00:14:17.616 "data_offset": 2048, 00:14:17.616 "data_size": 63488 00:14:17.616 }, 00:14:17.616 { 00:14:17.616 "name": null, 00:14:17.616 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:17.616 "is_configured": false, 00:14:17.616 "data_offset": 0, 00:14:17.616 "data_size": 63488 00:14:17.616 }, 00:14:17.616 { 00:14:17.616 "name": null, 00:14:17.616 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:17.616 "is_configured": false, 00:14:17.616 "data_offset": 0, 00:14:17.616 "data_size": 63488 00:14:17.616 }, 00:14:17.616 { 00:14:17.616 "name": "BaseBdev4", 00:14:17.616 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:17.616 "is_configured": true, 00:14:17.616 "data_offset": 2048, 00:14:17.616 "data_size": 63488 00:14:17.616 } 00:14:17.616 ] 00:14:17.616 }' 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.616 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.183 
08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.183 [2024-11-27 08:46:14.818175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.183 "name": "Existed_Raid", 00:14:18.183 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:18.183 "strip_size_kb": 64, 00:14:18.183 "state": "configuring", 00:14:18.183 "raid_level": "concat", 00:14:18.183 "superblock": true, 00:14:18.183 "num_base_bdevs": 4, 00:14:18.183 "num_base_bdevs_discovered": 3, 00:14:18.183 "num_base_bdevs_operational": 4, 00:14:18.183 "base_bdevs_list": [ 00:14:18.183 { 00:14:18.183 "name": "BaseBdev1", 00:14:18.183 "uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:18.183 "is_configured": true, 00:14:18.183 "data_offset": 2048, 00:14:18.183 "data_size": 63488 00:14:18.183 }, 00:14:18.183 { 00:14:18.183 "name": null, 00:14:18.183 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:18.183 "is_configured": false, 00:14:18.183 "data_offset": 0, 00:14:18.183 "data_size": 63488 00:14:18.183 }, 00:14:18.183 { 00:14:18.183 "name": "BaseBdev3", 00:14:18.183 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:18.183 "is_configured": true, 00:14:18.183 "data_offset": 2048, 00:14:18.183 "data_size": 63488 00:14:18.183 }, 00:14:18.183 { 00:14:18.183 "name": "BaseBdev4", 00:14:18.183 "uuid": 
"84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:18.183 "is_configured": true, 00:14:18.183 "data_offset": 2048, 00:14:18.183 "data_size": 63488 00:14:18.183 } 00:14:18.183 ] 00:14:18.183 }' 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.183 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.811 [2024-11-27 08:46:15.374429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.811 "name": "Existed_Raid", 00:14:18.811 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:18.811 "strip_size_kb": 64, 00:14:18.811 "state": "configuring", 00:14:18.811 "raid_level": "concat", 00:14:18.811 "superblock": true, 00:14:18.811 "num_base_bdevs": 4, 00:14:18.811 "num_base_bdevs_discovered": 2, 00:14:18.811 "num_base_bdevs_operational": 4, 00:14:18.811 "base_bdevs_list": [ 00:14:18.811 { 00:14:18.811 "name": null, 00:14:18.811 
"uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:18.811 "is_configured": false, 00:14:18.811 "data_offset": 0, 00:14:18.811 "data_size": 63488 00:14:18.811 }, 00:14:18.811 { 00:14:18.811 "name": null, 00:14:18.811 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:18.811 "is_configured": false, 00:14:18.811 "data_offset": 0, 00:14:18.811 "data_size": 63488 00:14:18.811 }, 00:14:18.811 { 00:14:18.811 "name": "BaseBdev3", 00:14:18.811 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:18.811 "is_configured": true, 00:14:18.811 "data_offset": 2048, 00:14:18.811 "data_size": 63488 00:14:18.811 }, 00:14:18.811 { 00:14:18.811 "name": "BaseBdev4", 00:14:18.811 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:18.811 "is_configured": true, 00:14:18.811 "data_offset": 2048, 00:14:18.811 "data_size": 63488 00:14:18.811 } 00:14:18.811 ] 00:14:18.811 }' 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.811 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.378 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:19.378 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.378 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 [2024-11-27 08:46:16.023370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.378 08:46:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.378 "name": "Existed_Raid", 00:14:19.378 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:19.378 "strip_size_kb": 64, 00:14:19.378 "state": "configuring", 00:14:19.378 "raid_level": "concat", 00:14:19.378 "superblock": true, 00:14:19.378 "num_base_bdevs": 4, 00:14:19.378 "num_base_bdevs_discovered": 3, 00:14:19.378 "num_base_bdevs_operational": 4, 00:14:19.378 "base_bdevs_list": [ 00:14:19.378 { 00:14:19.378 "name": null, 00:14:19.378 "uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:19.378 "is_configured": false, 00:14:19.378 "data_offset": 0, 00:14:19.378 "data_size": 63488 00:14:19.378 }, 00:14:19.378 { 00:14:19.378 "name": "BaseBdev2", 00:14:19.378 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:19.378 "is_configured": true, 00:14:19.378 "data_offset": 2048, 00:14:19.378 "data_size": 63488 00:14:19.378 }, 00:14:19.378 { 00:14:19.378 "name": "BaseBdev3", 00:14:19.378 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:19.378 "is_configured": true, 00:14:19.378 "data_offset": 2048, 00:14:19.378 "data_size": 63488 00:14:19.378 }, 00:14:19.378 { 00:14:19.378 "name": "BaseBdev4", 00:14:19.378 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:19.378 "is_configured": true, 00:14:19.378 "data_offset": 2048, 00:14:19.378 "data_size": 63488 00:14:19.378 } 00:14:19.378 ] 00:14:19.378 }' 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.378 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.944 08:46:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 018028a3-c6c6-4cd6-94eb-404234b3e963 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.944 [2024-11-27 08:46:16.674156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:19.944 [2024-11-27 08:46:16.674558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:19.944 [2024-11-27 08:46:16.674578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:19.944 NewBaseBdev 00:14:19.944 [2024-11-27 08:46:16.674968] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:19.944 [2024-11-27 08:46:16.675186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:19.944 [2024-11-27 08:46:16.675209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:19.944 [2024-11-27 08:46:16.675401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:19.944 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.944 
08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.944 [ 00:14:19.944 { 00:14:19.944 "name": "NewBaseBdev", 00:14:19.944 "aliases": [ 00:14:19.944 "018028a3-c6c6-4cd6-94eb-404234b3e963" 00:14:19.944 ], 00:14:19.944 "product_name": "Malloc disk", 00:14:19.944 "block_size": 512, 00:14:19.944 "num_blocks": 65536, 00:14:19.944 "uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:19.944 "assigned_rate_limits": { 00:14:19.944 "rw_ios_per_sec": 0, 00:14:19.944 "rw_mbytes_per_sec": 0, 00:14:19.944 "r_mbytes_per_sec": 0, 00:14:19.944 "w_mbytes_per_sec": 0 00:14:19.945 }, 00:14:19.945 "claimed": true, 00:14:19.945 "claim_type": "exclusive_write", 00:14:19.945 "zoned": false, 00:14:19.945 "supported_io_types": { 00:14:19.945 "read": true, 00:14:19.945 "write": true, 00:14:19.945 "unmap": true, 00:14:19.945 "flush": true, 00:14:19.945 "reset": true, 00:14:19.945 "nvme_admin": false, 00:14:19.945 "nvme_io": false, 00:14:20.204 "nvme_io_md": false, 00:14:20.204 "write_zeroes": true, 00:14:20.204 "zcopy": true, 00:14:20.204 "get_zone_info": false, 00:14:20.204 "zone_management": false, 00:14:20.204 "zone_append": false, 00:14:20.204 "compare": false, 00:14:20.204 "compare_and_write": false, 00:14:20.204 "abort": true, 00:14:20.204 "seek_hole": false, 00:14:20.204 "seek_data": false, 00:14:20.204 "copy": true, 00:14:20.204 "nvme_iov_md": false 00:14:20.204 }, 00:14:20.204 "memory_domains": [ 00:14:20.204 { 00:14:20.204 "dma_device_id": "system", 00:14:20.204 "dma_device_type": 1 00:14:20.204 }, 00:14:20.204 { 00:14:20.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.204 "dma_device_type": 2 00:14:20.204 } 00:14:20.204 ], 00:14:20.204 "driver_specific": {} 00:14:20.204 } 00:14:20.204 ] 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:20.204 08:46:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.204 "name": "Existed_Raid", 00:14:20.204 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:20.204 "strip_size_kb": 64, 00:14:20.204 
"state": "online", 00:14:20.204 "raid_level": "concat", 00:14:20.204 "superblock": true, 00:14:20.204 "num_base_bdevs": 4, 00:14:20.204 "num_base_bdevs_discovered": 4, 00:14:20.204 "num_base_bdevs_operational": 4, 00:14:20.204 "base_bdevs_list": [ 00:14:20.204 { 00:14:20.204 "name": "NewBaseBdev", 00:14:20.204 "uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:20.204 "is_configured": true, 00:14:20.204 "data_offset": 2048, 00:14:20.204 "data_size": 63488 00:14:20.204 }, 00:14:20.204 { 00:14:20.204 "name": "BaseBdev2", 00:14:20.204 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:20.204 "is_configured": true, 00:14:20.204 "data_offset": 2048, 00:14:20.204 "data_size": 63488 00:14:20.204 }, 00:14:20.204 { 00:14:20.204 "name": "BaseBdev3", 00:14:20.204 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:20.204 "is_configured": true, 00:14:20.204 "data_offset": 2048, 00:14:20.204 "data_size": 63488 00:14:20.204 }, 00:14:20.204 { 00:14:20.204 "name": "BaseBdev4", 00:14:20.204 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:20.204 "is_configured": true, 00:14:20.204 "data_offset": 2048, 00:14:20.204 "data_size": 63488 00:14:20.204 } 00:14:20.204 ] 00:14:20.204 }' 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.204 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.773 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:20.773 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:20.773 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:20.773 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:20.773 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:20.773 
08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:20.773 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.774 [2024-11-27 08:46:17.234841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:20.774 "name": "Existed_Raid", 00:14:20.774 "aliases": [ 00:14:20.774 "9dbba25f-6e24-4869-ab02-a05eec067a6a" 00:14:20.774 ], 00:14:20.774 "product_name": "Raid Volume", 00:14:20.774 "block_size": 512, 00:14:20.774 "num_blocks": 253952, 00:14:20.774 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:20.774 "assigned_rate_limits": { 00:14:20.774 "rw_ios_per_sec": 0, 00:14:20.774 "rw_mbytes_per_sec": 0, 00:14:20.774 "r_mbytes_per_sec": 0, 00:14:20.774 "w_mbytes_per_sec": 0 00:14:20.774 }, 00:14:20.774 "claimed": false, 00:14:20.774 "zoned": false, 00:14:20.774 "supported_io_types": { 00:14:20.774 "read": true, 00:14:20.774 "write": true, 00:14:20.774 "unmap": true, 00:14:20.774 "flush": true, 00:14:20.774 "reset": true, 00:14:20.774 "nvme_admin": false, 00:14:20.774 "nvme_io": false, 00:14:20.774 "nvme_io_md": false, 00:14:20.774 "write_zeroes": true, 00:14:20.774 "zcopy": false, 00:14:20.774 "get_zone_info": false, 00:14:20.774 "zone_management": false, 00:14:20.774 "zone_append": false, 00:14:20.774 "compare": false, 00:14:20.774 "compare_and_write": false, 00:14:20.774 "abort": 
false, 00:14:20.774 "seek_hole": false, 00:14:20.774 "seek_data": false, 00:14:20.774 "copy": false, 00:14:20.774 "nvme_iov_md": false 00:14:20.774 }, 00:14:20.774 "memory_domains": [ 00:14:20.774 { 00:14:20.774 "dma_device_id": "system", 00:14:20.774 "dma_device_type": 1 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.774 "dma_device_type": 2 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "dma_device_id": "system", 00:14:20.774 "dma_device_type": 1 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.774 "dma_device_type": 2 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "dma_device_id": "system", 00:14:20.774 "dma_device_type": 1 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.774 "dma_device_type": 2 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "dma_device_id": "system", 00:14:20.774 "dma_device_type": 1 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.774 "dma_device_type": 2 00:14:20.774 } 00:14:20.774 ], 00:14:20.774 "driver_specific": { 00:14:20.774 "raid": { 00:14:20.774 "uuid": "9dbba25f-6e24-4869-ab02-a05eec067a6a", 00:14:20.774 "strip_size_kb": 64, 00:14:20.774 "state": "online", 00:14:20.774 "raid_level": "concat", 00:14:20.774 "superblock": true, 00:14:20.774 "num_base_bdevs": 4, 00:14:20.774 "num_base_bdevs_discovered": 4, 00:14:20.774 "num_base_bdevs_operational": 4, 00:14:20.774 "base_bdevs_list": [ 00:14:20.774 { 00:14:20.774 "name": "NewBaseBdev", 00:14:20.774 "uuid": "018028a3-c6c6-4cd6-94eb-404234b3e963", 00:14:20.774 "is_configured": true, 00:14:20.774 "data_offset": 2048, 00:14:20.774 "data_size": 63488 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "name": "BaseBdev2", 00:14:20.774 "uuid": "0fe7bbf6-6629-47a9-898e-69371fa11e8b", 00:14:20.774 "is_configured": true, 00:14:20.774 "data_offset": 2048, 00:14:20.774 "data_size": 63488 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 
"name": "BaseBdev3", 00:14:20.774 "uuid": "24d48198-513a-4e5b-87e4-55e0439fd096", 00:14:20.774 "is_configured": true, 00:14:20.774 "data_offset": 2048, 00:14:20.774 "data_size": 63488 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "name": "BaseBdev4", 00:14:20.774 "uuid": "84b3af5b-d930-4ded-aeb4-5ded732663df", 00:14:20.774 "is_configured": true, 00:14:20.774 "data_offset": 2048, 00:14:20.774 "data_size": 63488 00:14:20.774 } 00:14:20.774 ] 00:14:20.774 } 00:14:20.774 } 00:14:20.774 }' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:20.774 BaseBdev2 00:14:20.774 BaseBdev3 00:14:20.774 BaseBdev4' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.774 08:46:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.774 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.034 [2024-11-27 08:46:17.606426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.034 [2024-11-27 08:46:17.606485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.034 [2024-11-27 08:46:17.606607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.034 [2024-11-27 08:46:17.606715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.034 [2024-11-27 08:46:17.606747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72234 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 72234 ']' 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 72234 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 72234 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 72234' 00:14:21.034 killing process with pid 72234 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 72234 00:14:21.034 [2024-11-27 08:46:17.648953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.034 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 72234 00:14:21.293 [2024-11-27 08:46:18.021850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.670 ************************************ 00:14:22.670 END TEST raid_state_function_test_sb 00:14:22.670 ************************************ 00:14:22.670 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:22.670 00:14:22.670 real 0m13.033s 00:14:22.670 user 0m21.463s 00:14:22.670 sys 
0m1.868s 00:14:22.670 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:14:22.670 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.670 08:46:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:22.670 08:46:19 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:14:22.670 08:46:19 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:14:22.670 08:46:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.670 ************************************ 00:14:22.670 START TEST raid_superblock_test 00:14:22.670 ************************************ 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test concat 4 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72910 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72910 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 72910 ']' 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:22.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:22.670 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.670 [2024-11-27 08:46:19.321303] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:14:22.670 [2024-11-27 08:46:19.321484] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72910 ] 00:14:22.929 [2024-11-27 08:46:19.496500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.929 [2024-11-27 08:46:19.646705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.187 [2024-11-27 08:46:19.872906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.187 [2024-11-27 08:46:19.872976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:23.755 
08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.755 malloc1 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.755 [2024-11-27 08:46:20.338534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:23.755 [2024-11-27 08:46:20.338816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.755 [2024-11-27 08:46:20.338896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:23.755 [2024-11-27 08:46:20.339110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.755 [2024-11-27 08:46:20.342258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.755 [2024-11-27 08:46:20.342461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:23.755 pt1 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:23.755 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.756 malloc2 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.756 [2024-11-27 08:46:20.399815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:23.756 [2024-11-27 08:46:20.400017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.756 [2024-11-27 08:46:20.400098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:23.756 [2024-11-27 08:46:20.400210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.756 [2024-11-27 08:46:20.403279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.756 [2024-11-27 08:46:20.403484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:23.756 
pt2 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.756 malloc3 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.756 [2024-11-27 08:46:20.467283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:23.756 [2024-11-27 08:46:20.467523] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.756 [2024-11-27 08:46:20.467601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:23.756 [2024-11-27 08:46:20.467729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.756 [2024-11-27 08:46:20.470751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.756 [2024-11-27 08:46:20.470951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:23.756 pt3 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.756 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.016 malloc4 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.016 [2024-11-27 08:46:20.528030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:24.016 [2024-11-27 08:46:20.528095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.016 [2024-11-27 08:46:20.528128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:24.016 [2024-11-27 08:46:20.528143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.016 [2024-11-27 08:46:20.531289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.016 [2024-11-27 08:46:20.531488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:24.016 pt4 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.016 [2024-11-27 08:46:20.540170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:24.016 [2024-11-27 
08:46:20.542871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:24.016 [2024-11-27 08:46:20.543105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:24.016 [2024-11-27 08:46:20.543217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:24.016 [2024-11-27 08:46:20.543536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:24.016 [2024-11-27 08:46:20.543558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:24.016 [2024-11-27 08:46:20.543942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:24.016 [2024-11-27 08:46:20.544175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:24.016 [2024-11-27 08:46:20.544198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:24.016 [2024-11-27 08:46:20.544485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.016 "name": "raid_bdev1", 00:14:24.016 "uuid": "0003eccb-ca57-4df2-82d4-698cec3523a6", 00:14:24.016 "strip_size_kb": 64, 00:14:24.016 "state": "online", 00:14:24.016 "raid_level": "concat", 00:14:24.016 "superblock": true, 00:14:24.016 "num_base_bdevs": 4, 00:14:24.016 "num_base_bdevs_discovered": 4, 00:14:24.016 "num_base_bdevs_operational": 4, 00:14:24.016 "base_bdevs_list": [ 00:14:24.016 { 00:14:24.016 "name": "pt1", 00:14:24.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.016 "is_configured": true, 00:14:24.016 "data_offset": 2048, 00:14:24.016 "data_size": 63488 00:14:24.016 }, 00:14:24.016 { 00:14:24.016 "name": "pt2", 00:14:24.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.016 "is_configured": true, 00:14:24.016 "data_offset": 2048, 00:14:24.016 "data_size": 63488 00:14:24.016 }, 00:14:24.016 { 00:14:24.016 "name": "pt3", 00:14:24.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.016 "is_configured": true, 00:14:24.016 "data_offset": 2048, 00:14:24.016 
"data_size": 63488 00:14:24.016 }, 00:14:24.016 { 00:14:24.016 "name": "pt4", 00:14:24.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:24.016 "is_configured": true, 00:14:24.016 "data_offset": 2048, 00:14:24.016 "data_size": 63488 00:14:24.016 } 00:14:24.016 ] 00:14:24.016 }' 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.016 08:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.586 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:24.586 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:24.586 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:24.586 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.586 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:24.586 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.586 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.587 [2024-11-27 08:46:21.089030] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:24.587 "name": "raid_bdev1", 00:14:24.587 "aliases": [ 00:14:24.587 "0003eccb-ca57-4df2-82d4-698cec3523a6" 
00:14:24.587 ], 00:14:24.587 "product_name": "Raid Volume", 00:14:24.587 "block_size": 512, 00:14:24.587 "num_blocks": 253952, 00:14:24.587 "uuid": "0003eccb-ca57-4df2-82d4-698cec3523a6", 00:14:24.587 "assigned_rate_limits": { 00:14:24.587 "rw_ios_per_sec": 0, 00:14:24.587 "rw_mbytes_per_sec": 0, 00:14:24.587 "r_mbytes_per_sec": 0, 00:14:24.587 "w_mbytes_per_sec": 0 00:14:24.587 }, 00:14:24.587 "claimed": false, 00:14:24.587 "zoned": false, 00:14:24.587 "supported_io_types": { 00:14:24.587 "read": true, 00:14:24.587 "write": true, 00:14:24.587 "unmap": true, 00:14:24.587 "flush": true, 00:14:24.587 "reset": true, 00:14:24.587 "nvme_admin": false, 00:14:24.587 "nvme_io": false, 00:14:24.587 "nvme_io_md": false, 00:14:24.587 "write_zeroes": true, 00:14:24.587 "zcopy": false, 00:14:24.587 "get_zone_info": false, 00:14:24.587 "zone_management": false, 00:14:24.587 "zone_append": false, 00:14:24.587 "compare": false, 00:14:24.587 "compare_and_write": false, 00:14:24.587 "abort": false, 00:14:24.587 "seek_hole": false, 00:14:24.587 "seek_data": false, 00:14:24.587 "copy": false, 00:14:24.587 "nvme_iov_md": false 00:14:24.587 }, 00:14:24.587 "memory_domains": [ 00:14:24.587 { 00:14:24.587 "dma_device_id": "system", 00:14:24.587 "dma_device_type": 1 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.587 "dma_device_type": 2 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "dma_device_id": "system", 00:14:24.587 "dma_device_type": 1 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.587 "dma_device_type": 2 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "dma_device_id": "system", 00:14:24.587 "dma_device_type": 1 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.587 "dma_device_type": 2 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "dma_device_id": "system", 00:14:24.587 "dma_device_type": 1 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:24.587 "dma_device_type": 2 00:14:24.587 } 00:14:24.587 ], 00:14:24.587 "driver_specific": { 00:14:24.587 "raid": { 00:14:24.587 "uuid": "0003eccb-ca57-4df2-82d4-698cec3523a6", 00:14:24.587 "strip_size_kb": 64, 00:14:24.587 "state": "online", 00:14:24.587 "raid_level": "concat", 00:14:24.587 "superblock": true, 00:14:24.587 "num_base_bdevs": 4, 00:14:24.587 "num_base_bdevs_discovered": 4, 00:14:24.587 "num_base_bdevs_operational": 4, 00:14:24.587 "base_bdevs_list": [ 00:14:24.587 { 00:14:24.587 "name": "pt1", 00:14:24.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.587 "is_configured": true, 00:14:24.587 "data_offset": 2048, 00:14:24.587 "data_size": 63488 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "name": "pt2", 00:14:24.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.587 "is_configured": true, 00:14:24.587 "data_offset": 2048, 00:14:24.587 "data_size": 63488 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "name": "pt3", 00:14:24.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.587 "is_configured": true, 00:14:24.587 "data_offset": 2048, 00:14:24.587 "data_size": 63488 00:14:24.587 }, 00:14:24.587 { 00:14:24.587 "name": "pt4", 00:14:24.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:24.587 "is_configured": true, 00:14:24.587 "data_offset": 2048, 00:14:24.587 "data_size": 63488 00:14:24.587 } 00:14:24.587 ] 00:14:24.587 } 00:14:24.587 } 00:14:24.587 }' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:24.587 pt2 00:14:24.587 pt3 00:14:24.587 pt4' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.587 08:46:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.587 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:24.846 [2024-11-27 08:46:21.437027] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0003eccb-ca57-4df2-82d4-698cec3523a6 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0003eccb-ca57-4df2-82d4-698cec3523a6 ']' 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.846 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 [2024-11-27 08:46:21.492651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.847 [2024-11-27 08:46:21.492683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.847 [2024-11-27 08:46:21.492812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.847 [2024-11-27 08:46:21.492910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.847 [2024-11-27 08:46:21.492933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.847 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.106 [2024-11-27 08:46:21.676802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:25.106 [2024-11-27 08:46:21.679815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:25.106 [2024-11-27 08:46:21.680021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:25.106 [2024-11-27 08:46:21.680225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:25.106 [2024-11-27 08:46:21.680462] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:25.106 [2024-11-27 08:46:21.680684] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:25.106 [2024-11-27 08:46:21.680855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:25.106 [2024-11-27 08:46:21.681028] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:25.106 [2024-11-27 08:46:21.681198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.106 [2024-11-27 08:46:21.681330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:14:25.106 request: 00:14:25.106 { 00:14:25.106 "name": "raid_bdev1", 00:14:25.106 "raid_level": "concat", 00:14:25.106 "base_bdevs": [ 00:14:25.106 "malloc1", 00:14:25.106 "malloc2", 00:14:25.106 "malloc3", 00:14:25.106 "malloc4" 00:14:25.106 ], 00:14:25.106 "strip_size_kb": 64, 00:14:25.106 "superblock": false, 00:14:25.106 "method": "bdev_raid_create", 00:14:25.106 "req_id": 1 00:14:25.106 } 00:14:25.106 Got JSON-RPC error response 00:14:25.106 response: 00:14:25.106 { 00:14:25.106 "code": -17, 00:14:25.106 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:25.106 } 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.106 [2024-11-27 08:46:21.741805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:25.106 [2024-11-27 08:46:21.742031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.106 [2024-11-27 08:46:21.742179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:25.106 [2024-11-27 08:46:21.742365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.106 [2024-11-27 08:46:21.745660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.106 [2024-11-27 08:46:21.745824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:25.106 [2024-11-27 08:46:21.746116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:25.106 [2024-11-27 08:46:21.746359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:25.106 pt1 00:14:25.106 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.107 "name": "raid_bdev1", 00:14:25.107 "uuid": "0003eccb-ca57-4df2-82d4-698cec3523a6", 00:14:25.107 "strip_size_kb": 64, 00:14:25.107 "state": "configuring", 00:14:25.107 "raid_level": "concat", 00:14:25.107 "superblock": true, 00:14:25.107 "num_base_bdevs": 4, 00:14:25.107 "num_base_bdevs_discovered": 1, 00:14:25.107 "num_base_bdevs_operational": 4, 00:14:25.107 "base_bdevs_list": [ 00:14:25.107 { 00:14:25.107 "name": "pt1", 00:14:25.107 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.107 "is_configured": true, 00:14:25.107 "data_offset": 2048, 00:14:25.107 "data_size": 63488 00:14:25.107 }, 00:14:25.107 { 00:14:25.107 "name": null, 00:14:25.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.107 "is_configured": false, 00:14:25.107 "data_offset": 2048, 00:14:25.107 "data_size": 63488 00:14:25.107 }, 00:14:25.107 { 00:14:25.107 "name": null, 00:14:25.107 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.107 "is_configured": false, 00:14:25.107 "data_offset": 2048, 00:14:25.107 "data_size": 63488 00:14:25.107 }, 00:14:25.107 { 00:14:25.107 "name": null, 00:14:25.107 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:25.107 "is_configured": false, 00:14:25.107 "data_offset": 2048, 00:14:25.107 "data_size": 63488 00:14:25.107 } 00:14:25.107 ] 00:14:25.107 }' 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.107 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.674 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.675 [2024-11-27 08:46:22.238394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:25.675 [2024-11-27 08:46:22.238647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.675 [2024-11-27 08:46:22.238691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:25.675 [2024-11-27 08:46:22.238712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.675 [2024-11-27 08:46:22.239368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.675 [2024-11-27 08:46:22.239401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:25.675 [2024-11-27 08:46:22.239519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:25.675 [2024-11-27 08:46:22.239560] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:25.675 pt2 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.675 [2024-11-27 08:46:22.246347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.675 08:46:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.675 "name": "raid_bdev1", 00:14:25.675 "uuid": "0003eccb-ca57-4df2-82d4-698cec3523a6", 00:14:25.675 "strip_size_kb": 64, 00:14:25.675 "state": "configuring", 00:14:25.675 "raid_level": "concat", 00:14:25.675 "superblock": true, 00:14:25.675 "num_base_bdevs": 4, 00:14:25.675 "num_base_bdevs_discovered": 1, 00:14:25.675 "num_base_bdevs_operational": 4, 00:14:25.675 "base_bdevs_list": [ 00:14:25.675 { 00:14:25.675 "name": "pt1", 00:14:25.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.675 "is_configured": true, 00:14:25.675 "data_offset": 2048, 00:14:25.675 "data_size": 63488 00:14:25.675 }, 00:14:25.675 { 00:14:25.675 "name": null, 00:14:25.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.675 "is_configured": false, 00:14:25.675 "data_offset": 0, 00:14:25.675 "data_size": 63488 00:14:25.675 }, 00:14:25.675 { 00:14:25.675 "name": null, 00:14:25.675 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.675 "is_configured": false, 00:14:25.675 "data_offset": 2048, 00:14:25.675 "data_size": 63488 00:14:25.675 }, 00:14:25.675 { 00:14:25.675 "name": null, 00:14:25.675 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:25.675 "is_configured": false, 00:14:25.675 "data_offset": 2048, 00:14:25.675 "data_size": 63488 00:14:25.675 } 00:14:25.675 ] 00:14:25.675 }' 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.675 08:46:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.242 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:26.242 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:26.242 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.242 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.242 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.242 [2024-11-27 08:46:22.762569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.243 [2024-11-27 08:46:22.762905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.243 [2024-11-27 08:46:22.762955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:26.243 [2024-11-27 08:46:22.762973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.243 [2024-11-27 08:46:22.763694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.243 [2024-11-27 08:46:22.763722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.243 [2024-11-27 08:46:22.763908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:26.243 [2024-11-27 08:46:22.763942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.243 pt2 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.243 [2024-11-27 08:46:22.770522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:26.243 [2024-11-27 08:46:22.770587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.243 [2024-11-27 08:46:22.770626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:26.243 [2024-11-27 08:46:22.770652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.243 [2024-11-27 08:46:22.771192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.243 [2024-11-27 08:46:22.771240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:26.243 [2024-11-27 08:46:22.771337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:26.243 [2024-11-27 08:46:22.771404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:26.243 pt3 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.243 [2024-11-27 08:46:22.782507] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:26.243 [2024-11-27 08:46:22.782577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.243 [2024-11-27 08:46:22.782611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:26.243 [2024-11-27 08:46:22.782626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.243 [2024-11-27 08:46:22.783201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.243 [2024-11-27 08:46:22.783234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:26.243 [2024-11-27 08:46:22.783398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:26.243 [2024-11-27 08:46:22.783434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:26.243 [2024-11-27 08:46:22.783643] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:26.243 [2024-11-27 08:46:22.783659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:26.243 [2024-11-27 08:46:22.784020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:26.243 [2024-11-27 08:46:22.784229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:26.243 [2024-11-27 08:46:22.784253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:26.243 [2024-11-27 08:46:22.784520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.243 pt4 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.243 "name": "raid_bdev1", 00:14:26.243 "uuid": "0003eccb-ca57-4df2-82d4-698cec3523a6", 00:14:26.243 "strip_size_kb": 64, 00:14:26.243 "state": "online", 00:14:26.243 "raid_level": "concat", 00:14:26.243 
"superblock": true, 00:14:26.243 "num_base_bdevs": 4, 00:14:26.243 "num_base_bdevs_discovered": 4, 00:14:26.243 "num_base_bdevs_operational": 4, 00:14:26.243 "base_bdevs_list": [ 00:14:26.243 { 00:14:26.243 "name": "pt1", 00:14:26.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.243 "is_configured": true, 00:14:26.243 "data_offset": 2048, 00:14:26.243 "data_size": 63488 00:14:26.243 }, 00:14:26.243 { 00:14:26.243 "name": "pt2", 00:14:26.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.243 "is_configured": true, 00:14:26.243 "data_offset": 2048, 00:14:26.243 "data_size": 63488 00:14:26.243 }, 00:14:26.243 { 00:14:26.243 "name": "pt3", 00:14:26.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.243 "is_configured": true, 00:14:26.243 "data_offset": 2048, 00:14:26.243 "data_size": 63488 00:14:26.243 }, 00:14:26.243 { 00:14:26.243 "name": "pt4", 00:14:26.243 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:26.243 "is_configured": true, 00:14:26.243 "data_offset": 2048, 00:14:26.243 "data_size": 63488 00:14:26.243 } 00:14:26.243 ] 00:14:26.243 }' 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.243 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.812 08:46:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.812 [2024-11-27 08:46:23.315125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.812 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.812 "name": "raid_bdev1", 00:14:26.812 "aliases": [ 00:14:26.812 "0003eccb-ca57-4df2-82d4-698cec3523a6" 00:14:26.812 ], 00:14:26.812 "product_name": "Raid Volume", 00:14:26.812 "block_size": 512, 00:14:26.812 "num_blocks": 253952, 00:14:26.812 "uuid": "0003eccb-ca57-4df2-82d4-698cec3523a6", 00:14:26.812 "assigned_rate_limits": { 00:14:26.812 "rw_ios_per_sec": 0, 00:14:26.812 "rw_mbytes_per_sec": 0, 00:14:26.812 "r_mbytes_per_sec": 0, 00:14:26.812 "w_mbytes_per_sec": 0 00:14:26.812 }, 00:14:26.812 "claimed": false, 00:14:26.812 "zoned": false, 00:14:26.812 "supported_io_types": { 00:14:26.812 "read": true, 00:14:26.812 "write": true, 00:14:26.812 "unmap": true, 00:14:26.812 "flush": true, 00:14:26.812 "reset": true, 00:14:26.812 "nvme_admin": false, 00:14:26.812 "nvme_io": false, 00:14:26.812 "nvme_io_md": false, 00:14:26.812 "write_zeroes": true, 00:14:26.812 "zcopy": false, 00:14:26.812 "get_zone_info": false, 00:14:26.812 "zone_management": false, 00:14:26.812 "zone_append": false, 00:14:26.812 "compare": false, 00:14:26.812 "compare_and_write": false, 00:14:26.812 "abort": false, 00:14:26.812 "seek_hole": false, 00:14:26.812 "seek_data": false, 00:14:26.812 "copy": false, 00:14:26.812 "nvme_iov_md": false 00:14:26.812 }, 00:14:26.812 
"memory_domains": [ 00:14:26.812 { 00:14:26.812 "dma_device_id": "system", 00:14:26.812 "dma_device_type": 1 00:14:26.812 }, 00:14:26.812 { 00:14:26.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.812 "dma_device_type": 2 00:14:26.812 }, 00:14:26.812 { 00:14:26.812 "dma_device_id": "system", 00:14:26.812 "dma_device_type": 1 00:14:26.812 }, 00:14:26.812 { 00:14:26.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.812 "dma_device_type": 2 00:14:26.812 }, 00:14:26.812 { 00:14:26.812 "dma_device_id": "system", 00:14:26.812 "dma_device_type": 1 00:14:26.812 }, 00:14:26.812 { 00:14:26.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.812 "dma_device_type": 2 00:14:26.812 }, 00:14:26.812 { 00:14:26.812 "dma_device_id": "system", 00:14:26.812 "dma_device_type": 1 00:14:26.812 }, 00:14:26.812 { 00:14:26.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.812 "dma_device_type": 2 00:14:26.812 } 00:14:26.812 ], 00:14:26.812 "driver_specific": { 00:14:26.812 "raid": { 00:14:26.812 "uuid": "0003eccb-ca57-4df2-82d4-698cec3523a6", 00:14:26.812 "strip_size_kb": 64, 00:14:26.812 "state": "online", 00:14:26.813 "raid_level": "concat", 00:14:26.813 "superblock": true, 00:14:26.813 "num_base_bdevs": 4, 00:14:26.813 "num_base_bdevs_discovered": 4, 00:14:26.813 "num_base_bdevs_operational": 4, 00:14:26.813 "base_bdevs_list": [ 00:14:26.813 { 00:14:26.813 "name": "pt1", 00:14:26.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.813 "is_configured": true, 00:14:26.813 "data_offset": 2048, 00:14:26.813 "data_size": 63488 00:14:26.813 }, 00:14:26.813 { 00:14:26.813 "name": "pt2", 00:14:26.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.813 "is_configured": true, 00:14:26.813 "data_offset": 2048, 00:14:26.813 "data_size": 63488 00:14:26.813 }, 00:14:26.813 { 00:14:26.813 "name": "pt3", 00:14:26.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.813 "is_configured": true, 00:14:26.813 "data_offset": 2048, 00:14:26.813 "data_size": 63488 
00:14:26.813 }, 00:14:26.813 { 00:14:26.813 "name": "pt4", 00:14:26.813 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:26.813 "is_configured": true, 00:14:26.813 "data_offset": 2048, 00:14:26.813 "data_size": 63488 00:14:26.813 } 00:14:26.813 ] 00:14:26.813 } 00:14:26.813 } 00:14:26.813 }' 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:26.813 pt2 00:14:26.813 pt3 00:14:26.813 pt4' 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.813 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.072 [2024-11-27 08:46:23.695190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0003eccb-ca57-4df2-82d4-698cec3523a6 '!=' 0003eccb-ca57-4df2-82d4-698cec3523a6 ']' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72910 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 72910 ']' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 72910 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@956 -- # uname 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 72910 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:14:27.072 killing process with pid 72910 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 72910' 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # kill 72910 00:14:27.072 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@975 -- # wait 72910 00:14:27.072 [2024-11-27 08:46:23.767760] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.072 [2024-11-27 08:46:23.767900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.072 [2024-11-27 08:46:23.768020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.072 [2024-11-27 08:46:23.768038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:27.641 [2024-11-27 08:46:24.154012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.577 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:28.577 00:14:28.577 real 0m6.063s 00:14:28.577 user 0m8.988s 00:14:28.577 sys 0m0.960s 00:14:28.577 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:14:28.577 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.577 ************************************ 00:14:28.577 END TEST raid_superblock_test 
00:14:28.577 ************************************ 00:14:28.577 08:46:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:14:28.577 08:46:25 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:14:28.577 08:46:25 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:14:28.577 08:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.577 ************************************ 00:14:28.577 START TEST raid_read_error_test 00:14:28.577 ************************************ 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test concat 4 read 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.er4efs5CMl 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73181 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73181 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 73181 ']' 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:28.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:28.577 08:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.835 [2024-11-27 08:46:25.426038] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:14:28.835 [2024-11-27 08:46:25.426239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73181 ] 00:14:29.094 [2024-11-27 08:46:25.619015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.094 [2024-11-27 08:46:25.765999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.352 [2024-11-27 08:46:25.989375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.352 [2024-11-27 08:46:25.989476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.919 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:29.919 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:14:29.919 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 BaseBdev1_malloc 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 true 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 [2024-11-27 08:46:26.531530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:29.920 [2024-11-27 08:46:26.531608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.920 [2024-11-27 08:46:26.531638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:29.920 [2024-11-27 08:46:26.531656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.920 [2024-11-27 08:46:26.534697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.920 [2024-11-27 08:46:26.534746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.920 BaseBdev1 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 BaseBdev2_malloc 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 true 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 [2024-11-27 08:46:26.595781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:29.920 [2024-11-27 08:46:26.595854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.920 [2024-11-27 08:46:26.595879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:29.920 [2024-11-27 08:46:26.595897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.920 [2024-11-27 08:46:26.598854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.920 [2024-11-27 08:46:26.598911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.920 BaseBdev2 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 BaseBdev3_malloc 00:14:29.920 08:46:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 true 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.920 [2024-11-27 08:46:26.671811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:29.920 [2024-11-27 08:46:26.671885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.920 [2024-11-27 08:46:26.671913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:29.920 [2024-11-27 08:46:26.671932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.920 [2024-11-27 08:46:26.674930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.920 [2024-11-27 08:46:26.674995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:29.920 BaseBdev3 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:29.920 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.179 BaseBdev4_malloc 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.179 true 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.179 [2024-11-27 08:46:26.740923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:30.179 [2024-11-27 08:46:26.740996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.179 [2024-11-27 08:46:26.741025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:30.179 [2024-11-27 08:46:26.741043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.179 [2024-11-27 08:46:26.744037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.179 [2024-11-27 08:46:26.744121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.179 BaseBdev4 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.179 [2024-11-27 08:46:26.753099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.179 [2024-11-27 08:46:26.755765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.179 [2024-11-27 08:46:26.755892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.179 [2024-11-27 08:46:26.756035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.179 [2024-11-27 08:46:26.756347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:30.179 [2024-11-27 08:46:26.756380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:30.179 [2024-11-27 08:46:26.756697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:30.179 [2024-11-27 08:46:26.756961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:30.179 [2024-11-27 08:46:26.756992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:30.179 [2024-11-27 08:46:26.757233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:30.179 08:46:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.179 "name": "raid_bdev1", 00:14:30.179 "uuid": "50137c90-421f-4609-b3a4-6c1c6ecd3a50", 00:14:30.179 "strip_size_kb": 64, 00:14:30.179 "state": "online", 00:14:30.179 "raid_level": "concat", 00:14:30.179 "superblock": true, 00:14:30.179 "num_base_bdevs": 4, 00:14:30.179 "num_base_bdevs_discovered": 4, 00:14:30.179 "num_base_bdevs_operational": 4, 00:14:30.179 "base_bdevs_list": [ 
00:14:30.179 { 00:14:30.179 "name": "BaseBdev1", 00:14:30.179 "uuid": "d4959029-2649-5f23-a450-28ebe137d933", 00:14:30.179 "is_configured": true, 00:14:30.179 "data_offset": 2048, 00:14:30.179 "data_size": 63488 00:14:30.179 }, 00:14:30.179 { 00:14:30.179 "name": "BaseBdev2", 00:14:30.179 "uuid": "57cc1156-47fd-500b-853e-52ed0083f778", 00:14:30.179 "is_configured": true, 00:14:30.179 "data_offset": 2048, 00:14:30.179 "data_size": 63488 00:14:30.179 }, 00:14:30.179 { 00:14:30.179 "name": "BaseBdev3", 00:14:30.179 "uuid": "3d6fe688-5dba-5835-88bf-69f7899d0546", 00:14:30.179 "is_configured": true, 00:14:30.179 "data_offset": 2048, 00:14:30.179 "data_size": 63488 00:14:30.179 }, 00:14:30.179 { 00:14:30.179 "name": "BaseBdev4", 00:14:30.179 "uuid": "015bda10-6fb6-5581-9011-150dd33c67b7", 00:14:30.179 "is_configured": true, 00:14:30.179 "data_offset": 2048, 00:14:30.179 "data_size": 63488 00:14:30.179 } 00:14:30.179 ] 00:14:30.179 }' 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.179 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.742 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:30.743 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:30.743 [2024-11-27 08:46:27.375015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.674 08:46:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.674 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.674 08:46:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.674 "name": "raid_bdev1", 00:14:31.674 "uuid": "50137c90-421f-4609-b3a4-6c1c6ecd3a50", 00:14:31.674 "strip_size_kb": 64, 00:14:31.674 "state": "online", 00:14:31.674 "raid_level": "concat", 00:14:31.674 "superblock": true, 00:14:31.674 "num_base_bdevs": 4, 00:14:31.674 "num_base_bdevs_discovered": 4, 00:14:31.674 "num_base_bdevs_operational": 4, 00:14:31.674 "base_bdevs_list": [ 00:14:31.674 { 00:14:31.674 "name": "BaseBdev1", 00:14:31.674 "uuid": "d4959029-2649-5f23-a450-28ebe137d933", 00:14:31.674 "is_configured": true, 00:14:31.674 "data_offset": 2048, 00:14:31.674 "data_size": 63488 00:14:31.674 }, 00:14:31.674 { 00:14:31.675 "name": "BaseBdev2", 00:14:31.675 "uuid": "57cc1156-47fd-500b-853e-52ed0083f778", 00:14:31.675 "is_configured": true, 00:14:31.675 "data_offset": 2048, 00:14:31.675 "data_size": 63488 00:14:31.675 }, 00:14:31.675 { 00:14:31.675 "name": "BaseBdev3", 00:14:31.675 "uuid": "3d6fe688-5dba-5835-88bf-69f7899d0546", 00:14:31.675 "is_configured": true, 00:14:31.675 "data_offset": 2048, 00:14:31.675 "data_size": 63488 00:14:31.675 }, 00:14:31.675 { 00:14:31.675 "name": "BaseBdev4", 00:14:31.675 "uuid": "015bda10-6fb6-5581-9011-150dd33c67b7", 00:14:31.675 "is_configured": true, 00:14:31.675 "data_offset": 2048, 00:14:31.675 "data_size": 63488 00:14:31.675 } 00:14:31.675 ] 00:14:31.675 }' 00:14:31.675 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.675 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.239 [2024-11-27 08:46:28.785219] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.239 [2024-11-27 08:46:28.785266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.239 [2024-11-27 08:46:28.788768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.239 [2024-11-27 08:46:28.788860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.239 [2024-11-27 08:46:28.788924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.239 [2024-11-27 08:46:28.788945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:32.239 { 00:14:32.239 "results": [ 00:14:32.239 { 00:14:32.239 "job": "raid_bdev1", 00:14:32.239 "core_mask": "0x1", 00:14:32.239 "workload": "randrw", 00:14:32.239 "percentage": 50, 00:14:32.239 "status": "finished", 00:14:32.239 "queue_depth": 1, 00:14:32.239 "io_size": 131072, 00:14:32.239 "runtime": 1.40747, 00:14:32.239 "iops": 10019.39650578698, 00:14:32.239 "mibps": 1252.4245632233724, 00:14:32.239 "io_failed": 1, 00:14:32.239 "io_timeout": 0, 00:14:32.239 "avg_latency_us": 140.5274864793435, 00:14:32.239 "min_latency_us": 38.63272727272727, 00:14:32.239 "max_latency_us": 1861.8181818181818 00:14:32.239 } 00:14:32.239 ], 00:14:32.239 "core_count": 1 00:14:32.239 } 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73181 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 73181 ']' 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 73181 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # uname 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 73181 00:14:32.239 killing process with pid 73181 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 73181' 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 73181 00:14:32.239 [2024-11-27 08:46:28.827178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.239 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 73181 00:14:32.497 [2024-11-27 08:46:29.120551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.er4efs5CMl 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:33.870 ************************************ 00:14:33.870 END TEST raid_read_error_test 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:33.870 00:14:33.870 real 0m4.978s 00:14:33.870 user 0m6.059s 00:14:33.870 sys 0m0.677s 
00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:14:33.870 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.870 ************************************ 00:14:33.870 08:46:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:14:33.870 08:46:30 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:14:33.870 08:46:30 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:14:33.870 08:46:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.870 ************************************ 00:14:33.870 START TEST raid_write_error_test 00:14:33.870 ************************************ 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test concat 4 write 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5ima8Sq3RA 00:14:33.870 08:46:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73327 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73327 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 73327 ']' 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:33.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:33.870 08:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.870 [2024-11-27 08:46:30.463679] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:14:33.870 [2024-11-27 08:46:30.463868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73327 ] 00:14:34.128 [2024-11-27 08:46:30.652552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.128 [2024-11-27 08:46:30.796835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.386 [2024-11-27 08:46:31.015586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.387 [2024-11-27 08:46:31.015675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 BaseBdev1_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 true 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 [2024-11-27 08:46:31.489948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:34.955 [2024-11-27 08:46:31.490031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.955 [2024-11-27 08:46:31.490060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:34.955 [2024-11-27 08:46:31.490077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.955 [2024-11-27 08:46:31.493024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.955 [2024-11-27 08:46:31.493069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.955 BaseBdev1 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 BaseBdev2_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:34.955 08:46:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 true 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 [2024-11-27 08:46:31.547970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:34.955 [2024-11-27 08:46:31.548036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.955 [2024-11-27 08:46:31.548060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:34.955 [2024-11-27 08:46:31.548077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.955 [2024-11-27 08:46:31.551052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.955 [2024-11-27 08:46:31.551263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.955 BaseBdev2 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:34.955 BaseBdev3_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 true 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 [2024-11-27 08:46:31.628537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:34.955 [2024-11-27 08:46:31.628603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.955 [2024-11-27 08:46:31.628630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:34.955 [2024-11-27 08:46:31.628662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.955 [2024-11-27 08:46:31.631631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.955 [2024-11-27 08:46:31.631678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:34.955 BaseBdev3 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 BaseBdev4_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.955 true 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:34.955 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.956 [2024-11-27 08:46:31.691112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:34.956 [2024-11-27 08:46:31.691372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.956 [2024-11-27 08:46:31.691412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:34.956 [2024-11-27 08:46:31.691432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.956 [2024-11-27 08:46:31.694419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.956 [2024-11-27 08:46:31.694471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:34.956 BaseBdev4 
00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.956 [2024-11-27 08:46:31.699307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.956 [2024-11-27 08:46:31.701926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.956 [2024-11-27 08:46:31.702031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.956 [2024-11-27 08:46:31.702143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:34.956 [2024-11-27 08:46:31.702475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:34.956 [2024-11-27 08:46:31.702498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:34.956 [2024-11-27 08:46:31.702839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:34.956 [2024-11-27 08:46:31.703044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:34.956 [2024-11-27 08:46:31.703061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:34.956 [2024-11-27 08:46:31.703304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.956 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.214 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.214 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.214 "name": "raid_bdev1", 00:14:35.214 "uuid": "d167b1b3-6ee6-4e04-be41-5b98fea72f18", 00:14:35.214 "strip_size_kb": 64, 00:14:35.214 "state": "online", 00:14:35.214 "raid_level": "concat", 00:14:35.214 "superblock": true, 00:14:35.214 "num_base_bdevs": 4, 00:14:35.214 "num_base_bdevs_discovered": 4, 00:14:35.214 
"num_base_bdevs_operational": 4, 00:14:35.214 "base_bdevs_list": [ 00:14:35.214 { 00:14:35.214 "name": "BaseBdev1", 00:14:35.214 "uuid": "c83d3c61-dffb-5f3c-9736-b06df676472f", 00:14:35.214 "is_configured": true, 00:14:35.214 "data_offset": 2048, 00:14:35.214 "data_size": 63488 00:14:35.214 }, 00:14:35.214 { 00:14:35.214 "name": "BaseBdev2", 00:14:35.214 "uuid": "c1984cda-b6f6-53f2-8227-366d9ba7f6f0", 00:14:35.214 "is_configured": true, 00:14:35.214 "data_offset": 2048, 00:14:35.214 "data_size": 63488 00:14:35.214 }, 00:14:35.214 { 00:14:35.214 "name": "BaseBdev3", 00:14:35.214 "uuid": "749cc7e4-af31-5084-8d7d-a9737800d980", 00:14:35.214 "is_configured": true, 00:14:35.214 "data_offset": 2048, 00:14:35.214 "data_size": 63488 00:14:35.214 }, 00:14:35.214 { 00:14:35.214 "name": "BaseBdev4", 00:14:35.214 "uuid": "70161f8a-2f0a-59e1-af6d-c302a1e38409", 00:14:35.214 "is_configured": true, 00:14:35.214 "data_offset": 2048, 00:14:35.214 "data_size": 63488 00:14:35.214 } 00:14:35.214 ] 00:14:35.214 }' 00:14:35.214 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.214 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.474 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:35.474 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:35.732 [2024-11-27 08:46:32.357070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.668 08:46:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.668 "name": "raid_bdev1", 00:14:36.668 "uuid": "d167b1b3-6ee6-4e04-be41-5b98fea72f18", 00:14:36.668 "strip_size_kb": 64, 00:14:36.668 "state": "online", 00:14:36.668 "raid_level": "concat", 00:14:36.668 "superblock": true, 00:14:36.668 "num_base_bdevs": 4, 00:14:36.668 "num_base_bdevs_discovered": 4, 00:14:36.668 "num_base_bdevs_operational": 4, 00:14:36.668 "base_bdevs_list": [ 00:14:36.668 { 00:14:36.668 "name": "BaseBdev1", 00:14:36.668 "uuid": "c83d3c61-dffb-5f3c-9736-b06df676472f", 00:14:36.668 "is_configured": true, 00:14:36.668 "data_offset": 2048, 00:14:36.668 "data_size": 63488 00:14:36.668 }, 00:14:36.668 { 00:14:36.668 "name": "BaseBdev2", 00:14:36.668 "uuid": "c1984cda-b6f6-53f2-8227-366d9ba7f6f0", 00:14:36.668 "is_configured": true, 00:14:36.668 "data_offset": 2048, 00:14:36.668 "data_size": 63488 00:14:36.668 }, 00:14:36.668 { 00:14:36.668 "name": "BaseBdev3", 00:14:36.668 "uuid": "749cc7e4-af31-5084-8d7d-a9737800d980", 00:14:36.668 "is_configured": true, 00:14:36.668 "data_offset": 2048, 00:14:36.668 "data_size": 63488 00:14:36.668 }, 00:14:36.668 { 00:14:36.668 "name": "BaseBdev4", 00:14:36.668 "uuid": "70161f8a-2f0a-59e1-af6d-c302a1e38409", 00:14:36.668 "is_configured": true, 00:14:36.668 "data_offset": 2048, 00:14:36.668 "data_size": 63488 00:14:36.668 } 00:14:36.668 ] 00:14:36.668 }' 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.668 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.235 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.235 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.235 08:46:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.235 [2024-11-27 08:46:33.764929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.235 [2024-11-27 08:46:33.764972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.235 [2024-11-27 08:46:33.768494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.235 [2024-11-27 08:46:33.768575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.235 [2024-11-27 08:46:33.768641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.235 [2024-11-27 08:46:33.768664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:37.235 { 00:14:37.235 "results": [ 00:14:37.235 { 00:14:37.236 "job": "raid_bdev1", 00:14:37.236 "core_mask": "0x1", 00:14:37.236 "workload": "randrw", 00:14:37.236 "percentage": 50, 00:14:37.236 "status": "finished", 00:14:37.236 "queue_depth": 1, 00:14:37.236 "io_size": 131072, 00:14:37.236 "runtime": 1.405128, 00:14:37.236 "iops": 10186.972290068947, 00:14:37.236 "mibps": 1273.3715362586183, 00:14:37.236 "io_failed": 1, 00:14:37.236 "io_timeout": 0, 00:14:37.236 "avg_latency_us": 138.18247254945544, 00:14:37.236 "min_latency_us": 38.63272727272727, 00:14:37.236 "max_latency_us": 1951.1854545454546 00:14:37.236 } 00:14:37.236 ], 00:14:37.236 "core_count": 1 00:14:37.236 } 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73327 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' -z 73327 ']' 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 73327 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # uname 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 73327 00:14:37.236 killing process with pid 73327 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 73327' 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 73327 00:14:37.236 [2024-11-27 08:46:33.804547] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.236 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@975 -- # wait 73327 00:14:37.494 [2024-11-27 08:46:34.103702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5ima8Sq3RA 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:38.913 ************************************ 00:14:38.913 END TEST raid_write_error_test 00:14:38.913 ************************************ 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:38.913 08:46:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:38.913 00:14:38.913 real 0m4.922s 00:14:38.913 user 0m5.977s 00:14:38.913 sys 0m0.685s 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:14:38.913 08:46:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.913 08:46:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:38.913 08:46:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:38.913 08:46:35 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:14:38.913 08:46:35 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:14:38.913 08:46:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.913 ************************************ 00:14:38.913 START TEST raid_state_function_test 00:14:38.913 ************************************ 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 4 false 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:38.913 08:46:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:38.913 Process raid pid: 73476 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73476 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73476' 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73476 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 73476 ']' 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:38.913 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.913 [2024-11-27 08:46:35.427836] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:14:38.913 [2024-11-27 08:46:35.428281] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.913 [2024-11-27 08:46:35.617952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.172 [2024-11-27 08:46:35.766060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.431 [2024-11-27 08:46:35.989324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.431 [2024-11-27 08:46:35.989399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.689 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:39.689 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:14:39.689 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:39.689 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.689 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.689 [2024-11-27 08:46:36.396707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.689 [2024-11-27 08:46:36.396775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.689 [2024-11-27 08:46:36.396793] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.689 [2024-11-27 08:46:36.396816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.690 [2024-11-27 08:46:36.396826] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:39.690 [2024-11-27 08:46:36.396841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:39.690 [2024-11-27 08:46:36.396851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:39.690 [2024-11-27 08:46:36.396866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.690 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.948 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.948 "name": "Existed_Raid", 00:14:39.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.948 "strip_size_kb": 0, 00:14:39.948 "state": "configuring", 00:14:39.948 "raid_level": "raid1", 00:14:39.948 "superblock": false, 00:14:39.948 "num_base_bdevs": 4, 00:14:39.948 "num_base_bdevs_discovered": 0, 00:14:39.948 "num_base_bdevs_operational": 4, 00:14:39.948 "base_bdevs_list": [ 00:14:39.948 { 00:14:39.948 "name": "BaseBdev1", 00:14:39.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.948 "is_configured": false, 00:14:39.948 "data_offset": 0, 00:14:39.948 "data_size": 0 00:14:39.948 }, 00:14:39.948 { 00:14:39.948 "name": "BaseBdev2", 00:14:39.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.948 "is_configured": false, 00:14:39.948 "data_offset": 0, 00:14:39.948 "data_size": 0 00:14:39.948 }, 00:14:39.948 { 00:14:39.948 "name": "BaseBdev3", 00:14:39.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.948 "is_configured": false, 00:14:39.948 "data_offset": 0, 00:14:39.948 "data_size": 0 00:14:39.948 }, 00:14:39.948 { 00:14:39.948 "name": "BaseBdev4", 00:14:39.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.948 "is_configured": false, 00:14:39.948 "data_offset": 0, 00:14:39.948 "data_size": 0 00:14:39.948 } 00:14:39.948 ] 00:14:39.948 }' 00:14:39.948 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.948 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.207 [2024-11-27 08:46:36.888815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.207 [2024-11-27 08:46:36.888867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.207 [2024-11-27 08:46:36.900769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.207 [2024-11-27 08:46:36.900961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.207 [2024-11-27 08:46:36.901083] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.207 [2024-11-27 08:46:36.901219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.207 [2024-11-27 08:46:36.901328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:40.207 [2024-11-27 08:46:36.901485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.207 [2024-11-27 08:46:36.901591] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:40.207 [2024-11-27 08:46:36.901710] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.207 [2024-11-27 08:46:36.954380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.207 BaseBdev1 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.207 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.467 [ 00:14:40.467 { 00:14:40.467 "name": "BaseBdev1", 00:14:40.467 "aliases": [ 00:14:40.467 "1b27275b-1fbb-4b7d-8443-a45d983689ed" 00:14:40.467 ], 00:14:40.467 "product_name": "Malloc disk", 00:14:40.467 "block_size": 512, 00:14:40.467 "num_blocks": 65536, 00:14:40.467 "uuid": "1b27275b-1fbb-4b7d-8443-a45d983689ed", 00:14:40.467 "assigned_rate_limits": { 00:14:40.467 "rw_ios_per_sec": 0, 00:14:40.467 "rw_mbytes_per_sec": 0, 00:14:40.467 "r_mbytes_per_sec": 0, 00:14:40.467 "w_mbytes_per_sec": 0 00:14:40.467 }, 00:14:40.467 "claimed": true, 00:14:40.467 "claim_type": "exclusive_write", 00:14:40.467 "zoned": false, 00:14:40.467 "supported_io_types": { 00:14:40.467 "read": true, 00:14:40.467 "write": true, 00:14:40.467 "unmap": true, 00:14:40.467 "flush": true, 00:14:40.467 "reset": true, 00:14:40.467 "nvme_admin": false, 00:14:40.467 "nvme_io": false, 00:14:40.467 "nvme_io_md": false, 00:14:40.467 "write_zeroes": true, 00:14:40.467 "zcopy": true, 00:14:40.467 "get_zone_info": false, 00:14:40.467 "zone_management": false, 00:14:40.467 "zone_append": false, 00:14:40.467 "compare": false, 00:14:40.467 "compare_and_write": false, 00:14:40.467 "abort": true, 00:14:40.467 "seek_hole": false, 00:14:40.467 "seek_data": false, 00:14:40.467 "copy": true, 00:14:40.467 "nvme_iov_md": false 00:14:40.467 }, 00:14:40.467 "memory_domains": [ 00:14:40.467 { 00:14:40.467 "dma_device_id": "system", 00:14:40.467 "dma_device_type": 1 00:14:40.467 }, 00:14:40.467 { 00:14:40.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.467 "dma_device_type": 2 00:14:40.467 } 00:14:40.467 ], 00:14:40.467 "driver_specific": {} 00:14:40.467 } 00:14:40.467 ] 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.467 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.467 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.467 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.467 "name": "Existed_Raid", 
00:14:40.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.467 "strip_size_kb": 0, 00:14:40.467 "state": "configuring", 00:14:40.467 "raid_level": "raid1", 00:14:40.467 "superblock": false, 00:14:40.467 "num_base_bdevs": 4, 00:14:40.467 "num_base_bdevs_discovered": 1, 00:14:40.467 "num_base_bdevs_operational": 4, 00:14:40.467 "base_bdevs_list": [ 00:14:40.467 { 00:14:40.467 "name": "BaseBdev1", 00:14:40.467 "uuid": "1b27275b-1fbb-4b7d-8443-a45d983689ed", 00:14:40.467 "is_configured": true, 00:14:40.467 "data_offset": 0, 00:14:40.467 "data_size": 65536 00:14:40.467 }, 00:14:40.467 { 00:14:40.467 "name": "BaseBdev2", 00:14:40.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.467 "is_configured": false, 00:14:40.467 "data_offset": 0, 00:14:40.467 "data_size": 0 00:14:40.467 }, 00:14:40.467 { 00:14:40.467 "name": "BaseBdev3", 00:14:40.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.467 "is_configured": false, 00:14:40.467 "data_offset": 0, 00:14:40.467 "data_size": 0 00:14:40.467 }, 00:14:40.467 { 00:14:40.467 "name": "BaseBdev4", 00:14:40.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.467 "is_configured": false, 00:14:40.467 "data_offset": 0, 00:14:40.467 "data_size": 0 00:14:40.467 } 00:14:40.467 ] 00:14:40.467 }' 00:14:40.467 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.467 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.035 [2024-11-27 08:46:37.510603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.035 [2024-11-27 08:46:37.510808] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.035 [2024-11-27 08:46:37.518633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.035 [2024-11-27 08:46:37.521302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.035 [2024-11-27 08:46:37.521362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.035 [2024-11-27 08:46:37.521380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.035 [2024-11-27 08:46:37.521398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.035 [2024-11-27 08:46:37.521409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:41.035 [2024-11-27 08:46:37.521432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:41.035 
08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.035 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.036 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.036 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.036 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.036 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.036 "name": "Existed_Raid", 00:14:41.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.036 "strip_size_kb": 0, 00:14:41.036 "state": "configuring", 00:14:41.036 "raid_level": "raid1", 00:14:41.036 "superblock": false, 00:14:41.036 "num_base_bdevs": 4, 00:14:41.036 "num_base_bdevs_discovered": 1, 
00:14:41.036 "num_base_bdevs_operational": 4, 00:14:41.036 "base_bdevs_list": [ 00:14:41.036 { 00:14:41.036 "name": "BaseBdev1", 00:14:41.036 "uuid": "1b27275b-1fbb-4b7d-8443-a45d983689ed", 00:14:41.036 "is_configured": true, 00:14:41.036 "data_offset": 0, 00:14:41.036 "data_size": 65536 00:14:41.036 }, 00:14:41.036 { 00:14:41.036 "name": "BaseBdev2", 00:14:41.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.036 "is_configured": false, 00:14:41.036 "data_offset": 0, 00:14:41.036 "data_size": 0 00:14:41.036 }, 00:14:41.036 { 00:14:41.036 "name": "BaseBdev3", 00:14:41.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.036 "is_configured": false, 00:14:41.036 "data_offset": 0, 00:14:41.036 "data_size": 0 00:14:41.036 }, 00:14:41.036 { 00:14:41.036 "name": "BaseBdev4", 00:14:41.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.036 "is_configured": false, 00:14:41.036 "data_offset": 0, 00:14:41.036 "data_size": 0 00:14:41.036 } 00:14:41.036 ] 00:14:41.036 }' 00:14:41.036 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.036 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 [2024-11-27 08:46:38.125137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.604 BaseBdev2 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:41.604 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.605 [ 00:14:41.605 { 00:14:41.605 "name": "BaseBdev2", 00:14:41.605 "aliases": [ 00:14:41.605 "e23836c5-fc40-4389-9903-772c00408fc3" 00:14:41.605 ], 00:14:41.605 "product_name": "Malloc disk", 00:14:41.605 "block_size": 512, 00:14:41.605 "num_blocks": 65536, 00:14:41.605 "uuid": "e23836c5-fc40-4389-9903-772c00408fc3", 00:14:41.605 "assigned_rate_limits": { 00:14:41.605 "rw_ios_per_sec": 0, 00:14:41.605 "rw_mbytes_per_sec": 0, 00:14:41.605 "r_mbytes_per_sec": 0, 00:14:41.605 "w_mbytes_per_sec": 0 00:14:41.605 }, 00:14:41.605 "claimed": true, 00:14:41.605 "claim_type": "exclusive_write", 00:14:41.605 "zoned": false, 00:14:41.605 "supported_io_types": { 00:14:41.605 "read": true, 
00:14:41.605 "write": true, 00:14:41.605 "unmap": true, 00:14:41.605 "flush": true, 00:14:41.605 "reset": true, 00:14:41.605 "nvme_admin": false, 00:14:41.605 "nvme_io": false, 00:14:41.605 "nvme_io_md": false, 00:14:41.605 "write_zeroes": true, 00:14:41.605 "zcopy": true, 00:14:41.605 "get_zone_info": false, 00:14:41.605 "zone_management": false, 00:14:41.605 "zone_append": false, 00:14:41.605 "compare": false, 00:14:41.605 "compare_and_write": false, 00:14:41.605 "abort": true, 00:14:41.605 "seek_hole": false, 00:14:41.605 "seek_data": false, 00:14:41.605 "copy": true, 00:14:41.605 "nvme_iov_md": false 00:14:41.605 }, 00:14:41.605 "memory_domains": [ 00:14:41.605 { 00:14:41.605 "dma_device_id": "system", 00:14:41.605 "dma_device_type": 1 00:14:41.605 }, 00:14:41.605 { 00:14:41.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.605 "dma_device_type": 2 00:14:41.605 } 00:14:41.605 ], 00:14:41.605 "driver_specific": {} 00:14:41.605 } 00:14:41.605 ] 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.605 "name": "Existed_Raid", 00:14:41.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.605 "strip_size_kb": 0, 00:14:41.605 "state": "configuring", 00:14:41.605 "raid_level": "raid1", 00:14:41.605 "superblock": false, 00:14:41.605 "num_base_bdevs": 4, 00:14:41.605 "num_base_bdevs_discovered": 2, 00:14:41.605 "num_base_bdevs_operational": 4, 00:14:41.605 "base_bdevs_list": [ 00:14:41.605 { 00:14:41.605 "name": "BaseBdev1", 00:14:41.605 "uuid": "1b27275b-1fbb-4b7d-8443-a45d983689ed", 00:14:41.605 "is_configured": true, 00:14:41.605 "data_offset": 0, 00:14:41.605 "data_size": 65536 00:14:41.605 }, 00:14:41.605 { 00:14:41.605 "name": "BaseBdev2", 00:14:41.605 "uuid": "e23836c5-fc40-4389-9903-772c00408fc3", 00:14:41.605 "is_configured": true, 
00:14:41.605 "data_offset": 0, 00:14:41.605 "data_size": 65536 00:14:41.605 }, 00:14:41.605 { 00:14:41.605 "name": "BaseBdev3", 00:14:41.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.605 "is_configured": false, 00:14:41.605 "data_offset": 0, 00:14:41.605 "data_size": 0 00:14:41.605 }, 00:14:41.605 { 00:14:41.605 "name": "BaseBdev4", 00:14:41.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.605 "is_configured": false, 00:14:41.605 "data_offset": 0, 00:14:41.605 "data_size": 0 00:14:41.605 } 00:14:41.605 ] 00:14:41.605 }' 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.605 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.172 [2024-11-27 08:46:38.677117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.172 BaseBdev3 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:42.172 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.173 [ 00:14:42.173 { 00:14:42.173 "name": "BaseBdev3", 00:14:42.173 "aliases": [ 00:14:42.173 "ce6b15cb-51bf-439d-80a2-cedbb168046a" 00:14:42.173 ], 00:14:42.173 "product_name": "Malloc disk", 00:14:42.173 "block_size": 512, 00:14:42.173 "num_blocks": 65536, 00:14:42.173 "uuid": "ce6b15cb-51bf-439d-80a2-cedbb168046a", 00:14:42.173 "assigned_rate_limits": { 00:14:42.173 "rw_ios_per_sec": 0, 00:14:42.173 "rw_mbytes_per_sec": 0, 00:14:42.173 "r_mbytes_per_sec": 0, 00:14:42.173 "w_mbytes_per_sec": 0 00:14:42.173 }, 00:14:42.173 "claimed": true, 00:14:42.173 "claim_type": "exclusive_write", 00:14:42.173 "zoned": false, 00:14:42.173 "supported_io_types": { 00:14:42.173 "read": true, 00:14:42.173 "write": true, 00:14:42.173 "unmap": true, 00:14:42.173 "flush": true, 00:14:42.173 "reset": true, 00:14:42.173 "nvme_admin": false, 00:14:42.173 "nvme_io": false, 00:14:42.173 "nvme_io_md": false, 00:14:42.173 "write_zeroes": true, 00:14:42.173 "zcopy": true, 00:14:42.173 "get_zone_info": false, 00:14:42.173 "zone_management": false, 00:14:42.173 "zone_append": false, 00:14:42.173 "compare": false, 00:14:42.173 "compare_and_write": false, 
00:14:42.173 "abort": true, 00:14:42.173 "seek_hole": false, 00:14:42.173 "seek_data": false, 00:14:42.173 "copy": true, 00:14:42.173 "nvme_iov_md": false 00:14:42.173 }, 00:14:42.173 "memory_domains": [ 00:14:42.173 { 00:14:42.173 "dma_device_id": "system", 00:14:42.173 "dma_device_type": 1 00:14:42.173 }, 00:14:42.173 { 00:14:42.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.173 "dma_device_type": 2 00:14:42.173 } 00:14:42.173 ], 00:14:42.173 "driver_specific": {} 00:14:42.173 } 00:14:42.173 ] 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.173 "name": "Existed_Raid", 00:14:42.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.173 "strip_size_kb": 0, 00:14:42.173 "state": "configuring", 00:14:42.173 "raid_level": "raid1", 00:14:42.173 "superblock": false, 00:14:42.173 "num_base_bdevs": 4, 00:14:42.173 "num_base_bdevs_discovered": 3, 00:14:42.173 "num_base_bdevs_operational": 4, 00:14:42.173 "base_bdevs_list": [ 00:14:42.173 { 00:14:42.173 "name": "BaseBdev1", 00:14:42.173 "uuid": "1b27275b-1fbb-4b7d-8443-a45d983689ed", 00:14:42.173 "is_configured": true, 00:14:42.173 "data_offset": 0, 00:14:42.173 "data_size": 65536 00:14:42.173 }, 00:14:42.173 { 00:14:42.173 "name": "BaseBdev2", 00:14:42.173 "uuid": "e23836c5-fc40-4389-9903-772c00408fc3", 00:14:42.173 "is_configured": true, 00:14:42.173 "data_offset": 0, 00:14:42.173 "data_size": 65536 00:14:42.173 }, 00:14:42.173 { 00:14:42.173 "name": "BaseBdev3", 00:14:42.173 "uuid": "ce6b15cb-51bf-439d-80a2-cedbb168046a", 00:14:42.173 "is_configured": true, 00:14:42.173 "data_offset": 0, 00:14:42.173 "data_size": 65536 00:14:42.173 }, 00:14:42.173 { 00:14:42.173 "name": "BaseBdev4", 00:14:42.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.173 "is_configured": false, 
00:14:42.173 "data_offset": 0, 00:14:42.173 "data_size": 0 00:14:42.173 } 00:14:42.173 ] 00:14:42.173 }' 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.173 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.740 [2024-11-27 08:46:39.239087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:42.740 [2024-11-27 08:46:39.239166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:42.740 [2024-11-27 08:46:39.239180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:42.740 [2024-11-27 08:46:39.239615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:42.740 [2024-11-27 08:46:39.239853] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:42.740 [2024-11-27 08:46:39.239877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:42.740 [2024-11-27 08:46:39.240214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.740 BaseBdev4 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.740 [ 00:14:42.740 { 00:14:42.740 "name": "BaseBdev4", 00:14:42.740 "aliases": [ 00:14:42.740 "4f1a9f64-86c2-447b-b7b6-c6a4ffb33ec7" 00:14:42.740 ], 00:14:42.740 "product_name": "Malloc disk", 00:14:42.740 "block_size": 512, 00:14:42.740 "num_blocks": 65536, 00:14:42.740 "uuid": "4f1a9f64-86c2-447b-b7b6-c6a4ffb33ec7", 00:14:42.740 "assigned_rate_limits": { 00:14:42.740 "rw_ios_per_sec": 0, 00:14:42.740 "rw_mbytes_per_sec": 0, 00:14:42.740 "r_mbytes_per_sec": 0, 00:14:42.740 "w_mbytes_per_sec": 0 00:14:42.740 }, 00:14:42.740 "claimed": true, 00:14:42.740 "claim_type": "exclusive_write", 00:14:42.740 "zoned": false, 00:14:42.740 "supported_io_types": { 00:14:42.740 "read": true, 00:14:42.740 "write": true, 00:14:42.740 "unmap": true, 00:14:42.740 "flush": true, 00:14:42.740 "reset": true, 00:14:42.740 
"nvme_admin": false, 00:14:42.740 "nvme_io": false, 00:14:42.740 "nvme_io_md": false, 00:14:42.740 "write_zeroes": true, 00:14:42.740 "zcopy": true, 00:14:42.740 "get_zone_info": false, 00:14:42.740 "zone_management": false, 00:14:42.740 "zone_append": false, 00:14:42.740 "compare": false, 00:14:42.740 "compare_and_write": false, 00:14:42.740 "abort": true, 00:14:42.740 "seek_hole": false, 00:14:42.740 "seek_data": false, 00:14:42.740 "copy": true, 00:14:42.740 "nvme_iov_md": false 00:14:42.740 }, 00:14:42.740 "memory_domains": [ 00:14:42.740 { 00:14:42.740 "dma_device_id": "system", 00:14:42.740 "dma_device_type": 1 00:14:42.740 }, 00:14:42.740 { 00:14:42.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.740 "dma_device_type": 2 00:14:42.740 } 00:14:42.740 ], 00:14:42.740 "driver_specific": {} 00:14:42.740 } 00:14:42.740 ] 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.740 08:46:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.740 "name": "Existed_Raid", 00:14:42.740 "uuid": "fad24fd7-383e-41e8-8515-d7534bb9eb20", 00:14:42.740 "strip_size_kb": 0, 00:14:42.740 "state": "online", 00:14:42.740 "raid_level": "raid1", 00:14:42.740 "superblock": false, 00:14:42.740 "num_base_bdevs": 4, 00:14:42.740 "num_base_bdevs_discovered": 4, 00:14:42.740 "num_base_bdevs_operational": 4, 00:14:42.740 "base_bdevs_list": [ 00:14:42.740 { 00:14:42.740 "name": "BaseBdev1", 00:14:42.740 "uuid": "1b27275b-1fbb-4b7d-8443-a45d983689ed", 00:14:42.740 "is_configured": true, 00:14:42.740 "data_offset": 0, 00:14:42.740 "data_size": 65536 00:14:42.740 }, 00:14:42.740 { 00:14:42.740 "name": "BaseBdev2", 00:14:42.740 "uuid": "e23836c5-fc40-4389-9903-772c00408fc3", 00:14:42.740 "is_configured": true, 00:14:42.740 "data_offset": 0, 00:14:42.740 "data_size": 65536 00:14:42.740 }, 00:14:42.740 { 00:14:42.740 "name": "BaseBdev3", 00:14:42.740 "uuid": 
"ce6b15cb-51bf-439d-80a2-cedbb168046a", 00:14:42.740 "is_configured": true, 00:14:42.740 "data_offset": 0, 00:14:42.740 "data_size": 65536 00:14:42.740 }, 00:14:42.740 { 00:14:42.740 "name": "BaseBdev4", 00:14:42.740 "uuid": "4f1a9f64-86c2-447b-b7b6-c6a4ffb33ec7", 00:14:42.740 "is_configured": true, 00:14:42.740 "data_offset": 0, 00:14:42.740 "data_size": 65536 00:14:42.740 } 00:14:42.740 ] 00:14:42.740 }' 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.740 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.307 [2024-11-27 08:46:39.803788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.307 08:46:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.307 "name": "Existed_Raid", 00:14:43.307 "aliases": [ 00:14:43.307 "fad24fd7-383e-41e8-8515-d7534bb9eb20" 00:14:43.307 ], 00:14:43.307 "product_name": "Raid Volume", 00:14:43.307 "block_size": 512, 00:14:43.307 "num_blocks": 65536, 00:14:43.307 "uuid": "fad24fd7-383e-41e8-8515-d7534bb9eb20", 00:14:43.307 "assigned_rate_limits": { 00:14:43.307 "rw_ios_per_sec": 0, 00:14:43.307 "rw_mbytes_per_sec": 0, 00:14:43.307 "r_mbytes_per_sec": 0, 00:14:43.307 "w_mbytes_per_sec": 0 00:14:43.307 }, 00:14:43.307 "claimed": false, 00:14:43.307 "zoned": false, 00:14:43.307 "supported_io_types": { 00:14:43.307 "read": true, 00:14:43.307 "write": true, 00:14:43.307 "unmap": false, 00:14:43.307 "flush": false, 00:14:43.307 "reset": true, 00:14:43.307 "nvme_admin": false, 00:14:43.307 "nvme_io": false, 00:14:43.307 "nvme_io_md": false, 00:14:43.307 "write_zeroes": true, 00:14:43.307 "zcopy": false, 00:14:43.307 "get_zone_info": false, 00:14:43.307 "zone_management": false, 00:14:43.307 "zone_append": false, 00:14:43.307 "compare": false, 00:14:43.307 "compare_and_write": false, 00:14:43.307 "abort": false, 00:14:43.307 "seek_hole": false, 00:14:43.307 "seek_data": false, 00:14:43.307 "copy": false, 00:14:43.307 "nvme_iov_md": false 00:14:43.307 }, 00:14:43.307 "memory_domains": [ 00:14:43.307 { 00:14:43.307 "dma_device_id": "system", 00:14:43.307 "dma_device_type": 1 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.307 "dma_device_type": 2 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "dma_device_id": "system", 00:14:43.307 "dma_device_type": 1 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.307 "dma_device_type": 2 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "dma_device_id": "system", 00:14:43.307 "dma_device_type": 1 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:43.307 "dma_device_type": 2 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "dma_device_id": "system", 00:14:43.307 "dma_device_type": 1 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.307 "dma_device_type": 2 00:14:43.307 } 00:14:43.307 ], 00:14:43.307 "driver_specific": { 00:14:43.307 "raid": { 00:14:43.307 "uuid": "fad24fd7-383e-41e8-8515-d7534bb9eb20", 00:14:43.307 "strip_size_kb": 0, 00:14:43.307 "state": "online", 00:14:43.307 "raid_level": "raid1", 00:14:43.307 "superblock": false, 00:14:43.307 "num_base_bdevs": 4, 00:14:43.307 "num_base_bdevs_discovered": 4, 00:14:43.307 "num_base_bdevs_operational": 4, 00:14:43.307 "base_bdevs_list": [ 00:14:43.307 { 00:14:43.307 "name": "BaseBdev1", 00:14:43.307 "uuid": "1b27275b-1fbb-4b7d-8443-a45d983689ed", 00:14:43.307 "is_configured": true, 00:14:43.307 "data_offset": 0, 00:14:43.307 "data_size": 65536 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "name": "BaseBdev2", 00:14:43.307 "uuid": "e23836c5-fc40-4389-9903-772c00408fc3", 00:14:43.307 "is_configured": true, 00:14:43.307 "data_offset": 0, 00:14:43.307 "data_size": 65536 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "name": "BaseBdev3", 00:14:43.307 "uuid": "ce6b15cb-51bf-439d-80a2-cedbb168046a", 00:14:43.307 "is_configured": true, 00:14:43.307 "data_offset": 0, 00:14:43.307 "data_size": 65536 00:14:43.307 }, 00:14:43.307 { 00:14:43.307 "name": "BaseBdev4", 00:14:43.307 "uuid": "4f1a9f64-86c2-447b-b7b6-c6a4ffb33ec7", 00:14:43.307 "is_configured": true, 00:14:43.307 "data_offset": 0, 00:14:43.307 "data_size": 65536 00:14:43.307 } 00:14:43.307 ] 00:14:43.307 } 00:14:43.307 } 00:14:43.307 }' 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:43.307 BaseBdev2 00:14:43.307 BaseBdev3 
00:14:43.307 BaseBdev4' 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.307 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.308 08:46:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.308 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.567 08:46:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.567 [2024-11-27 08:46:40.171549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.567 
08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.567 "name": "Existed_Raid", 00:14:43.567 "uuid": "fad24fd7-383e-41e8-8515-d7534bb9eb20", 00:14:43.567 "strip_size_kb": 0, 00:14:43.567 "state": "online", 00:14:43.567 "raid_level": "raid1", 00:14:43.567 "superblock": false, 00:14:43.567 "num_base_bdevs": 4, 00:14:43.567 "num_base_bdevs_discovered": 3, 00:14:43.567 "num_base_bdevs_operational": 3, 00:14:43.567 "base_bdevs_list": [ 00:14:43.567 { 00:14:43.567 "name": null, 00:14:43.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.567 "is_configured": false, 00:14:43.567 "data_offset": 0, 00:14:43.567 "data_size": 65536 00:14:43.567 }, 00:14:43.567 { 00:14:43.567 "name": "BaseBdev2", 00:14:43.567 "uuid": "e23836c5-fc40-4389-9903-772c00408fc3", 00:14:43.567 "is_configured": true, 00:14:43.567 "data_offset": 0, 00:14:43.567 "data_size": 65536 00:14:43.567 }, 00:14:43.567 { 00:14:43.567 "name": "BaseBdev3", 00:14:43.567 "uuid": "ce6b15cb-51bf-439d-80a2-cedbb168046a", 00:14:43.567 "is_configured": true, 00:14:43.567 "data_offset": 0, 
00:14:43.567 "data_size": 65536 00:14:43.567 }, 00:14:43.567 { 00:14:43.567 "name": "BaseBdev4", 00:14:43.567 "uuid": "4f1a9f64-86c2-447b-b7b6-c6a4ffb33ec7", 00:14:43.567 "is_configured": true, 00:14:43.567 "data_offset": 0, 00:14:43.567 "data_size": 65536 00:14:43.567 } 00:14:43.567 ] 00:14:43.567 }' 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.567 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.134 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.134 [2024-11-27 08:46:40.835813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.393 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.393 [2024-11-27 08:46:41.001049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.393 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.652 [2024-11-27 08:46:41.157796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:44.652 [2024-11-27 08:46:41.157938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.652 [2024-11-27 08:46:41.251135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.652 [2024-11-27 08:46:41.251516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.652 [2024-11-27 08:46:41.251666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.652 BaseBdev2 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 
-- # [[ -z '' ]] 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.652 [ 00:14:44.652 { 00:14:44.652 "name": "BaseBdev2", 00:14:44.652 "aliases": [ 00:14:44.652 "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3" 00:14:44.652 ], 00:14:44.652 "product_name": "Malloc disk", 00:14:44.652 "block_size": 512, 00:14:44.652 "num_blocks": 65536, 00:14:44.652 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:44.652 "assigned_rate_limits": { 00:14:44.652 "rw_ios_per_sec": 0, 00:14:44.652 "rw_mbytes_per_sec": 0, 00:14:44.652 "r_mbytes_per_sec": 0, 00:14:44.652 "w_mbytes_per_sec": 0 00:14:44.652 }, 00:14:44.652 "claimed": false, 00:14:44.652 "zoned": false, 00:14:44.652 "supported_io_types": { 00:14:44.652 "read": true, 00:14:44.652 "write": true, 00:14:44.652 "unmap": true, 00:14:44.652 "flush": true, 00:14:44.652 "reset": true, 00:14:44.652 "nvme_admin": false, 00:14:44.652 "nvme_io": false, 00:14:44.652 "nvme_io_md": false, 00:14:44.652 "write_zeroes": true, 00:14:44.652 "zcopy": true, 00:14:44.652 "get_zone_info": false, 00:14:44.652 "zone_management": false, 00:14:44.652 "zone_append": false, 00:14:44.652 "compare": false, 
00:14:44.652 "compare_and_write": false, 00:14:44.652 "abort": true, 00:14:44.652 "seek_hole": false, 00:14:44.652 "seek_data": false, 00:14:44.652 "copy": true, 00:14:44.652 "nvme_iov_md": false 00:14:44.652 }, 00:14:44.652 "memory_domains": [ 00:14:44.652 { 00:14:44.652 "dma_device_id": "system", 00:14:44.652 "dma_device_type": 1 00:14:44.652 }, 00:14:44.652 { 00:14:44.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.652 "dma_device_type": 2 00:14:44.652 } 00:14:44.652 ], 00:14:44.652 "driver_specific": {} 00:14:44.652 } 00:14:44.652 ] 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.652 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.911 BaseBdev3 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' 
]] 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.911 [ 00:14:44.911 { 00:14:44.911 "name": "BaseBdev3", 00:14:44.911 "aliases": [ 00:14:44.911 "e0c50d03-4a42-41f0-925b-851efdbc299b" 00:14:44.911 ], 00:14:44.911 "product_name": "Malloc disk", 00:14:44.911 "block_size": 512, 00:14:44.911 "num_blocks": 65536, 00:14:44.911 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:44.911 "assigned_rate_limits": { 00:14:44.911 "rw_ios_per_sec": 0, 00:14:44.911 "rw_mbytes_per_sec": 0, 00:14:44.911 "r_mbytes_per_sec": 0, 00:14:44.911 "w_mbytes_per_sec": 0 00:14:44.911 }, 00:14:44.911 "claimed": false, 00:14:44.911 "zoned": false, 00:14:44.911 "supported_io_types": { 00:14:44.911 "read": true, 00:14:44.911 "write": true, 00:14:44.911 "unmap": true, 00:14:44.911 "flush": true, 00:14:44.911 "reset": true, 00:14:44.911 "nvme_admin": false, 00:14:44.911 "nvme_io": false, 00:14:44.911 "nvme_io_md": false, 00:14:44.911 "write_zeroes": true, 00:14:44.911 "zcopy": true, 00:14:44.911 "get_zone_info": false, 00:14:44.911 "zone_management": false, 00:14:44.911 "zone_append": false, 00:14:44.911 "compare": false, 00:14:44.911 
"compare_and_write": false, 00:14:44.911 "abort": true, 00:14:44.911 "seek_hole": false, 00:14:44.911 "seek_data": false, 00:14:44.911 "copy": true, 00:14:44.911 "nvme_iov_md": false 00:14:44.911 }, 00:14:44.911 "memory_domains": [ 00:14:44.911 { 00:14:44.911 "dma_device_id": "system", 00:14:44.911 "dma_device_type": 1 00:14:44.911 }, 00:14:44.911 { 00:14:44.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.911 "dma_device_type": 2 00:14:44.911 } 00:14:44.911 ], 00:14:44.911 "driver_specific": {} 00:14:44.911 } 00:14:44.911 ] 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.911 BaseBdev4 00:14:44.911 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 
00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.912 [ 00:14:44.912 { 00:14:44.912 "name": "BaseBdev4", 00:14:44.912 "aliases": [ 00:14:44.912 "971b3a18-0908-4a77-98de-05bf23551398" 00:14:44.912 ], 00:14:44.912 "product_name": "Malloc disk", 00:14:44.912 "block_size": 512, 00:14:44.912 "num_blocks": 65536, 00:14:44.912 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:44.912 "assigned_rate_limits": { 00:14:44.912 "rw_ios_per_sec": 0, 00:14:44.912 "rw_mbytes_per_sec": 0, 00:14:44.912 "r_mbytes_per_sec": 0, 00:14:44.912 "w_mbytes_per_sec": 0 00:14:44.912 }, 00:14:44.912 "claimed": false, 00:14:44.912 "zoned": false, 00:14:44.912 "supported_io_types": { 00:14:44.912 "read": true, 00:14:44.912 "write": true, 00:14:44.912 "unmap": true, 00:14:44.912 "flush": true, 00:14:44.912 "reset": true, 00:14:44.912 "nvme_admin": false, 00:14:44.912 "nvme_io": false, 00:14:44.912 "nvme_io_md": false, 00:14:44.912 "write_zeroes": true, 00:14:44.912 "zcopy": true, 00:14:44.912 "get_zone_info": false, 00:14:44.912 "zone_management": false, 00:14:44.912 "zone_append": false, 00:14:44.912 "compare": false, 00:14:44.912 
"compare_and_write": false, 00:14:44.912 "abort": true, 00:14:44.912 "seek_hole": false, 00:14:44.912 "seek_data": false, 00:14:44.912 "copy": true, 00:14:44.912 "nvme_iov_md": false 00:14:44.912 }, 00:14:44.912 "memory_domains": [ 00:14:44.912 { 00:14:44.912 "dma_device_id": "system", 00:14:44.912 "dma_device_type": 1 00:14:44.912 }, 00:14:44.912 { 00:14:44.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.912 "dma_device_type": 2 00:14:44.912 } 00:14:44.912 ], 00:14:44.912 "driver_specific": {} 00:14:44.912 } 00:14:44.912 ] 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.912 [2024-11-27 08:46:41.550762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.912 [2024-11-27 08:46:41.550972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.912 [2024-11-27 08:46:41.551024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.912 [2024-11-27 08:46:41.553746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.912 [2024-11-27 08:46:41.553814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.912 "name": "Existed_Raid", 00:14:44.912 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:44.912 "strip_size_kb": 0, 00:14:44.912 "state": "configuring", 00:14:44.912 "raid_level": "raid1", 00:14:44.912 "superblock": false, 00:14:44.912 "num_base_bdevs": 4, 00:14:44.912 "num_base_bdevs_discovered": 3, 00:14:44.912 "num_base_bdevs_operational": 4, 00:14:44.912 "base_bdevs_list": [ 00:14:44.912 { 00:14:44.912 "name": "BaseBdev1", 00:14:44.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.912 "is_configured": false, 00:14:44.912 "data_offset": 0, 00:14:44.912 "data_size": 0 00:14:44.912 }, 00:14:44.912 { 00:14:44.912 "name": "BaseBdev2", 00:14:44.912 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:44.912 "is_configured": true, 00:14:44.912 "data_offset": 0, 00:14:44.912 "data_size": 65536 00:14:44.912 }, 00:14:44.912 { 00:14:44.912 "name": "BaseBdev3", 00:14:44.912 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:44.912 "is_configured": true, 00:14:44.912 "data_offset": 0, 00:14:44.912 "data_size": 65536 00:14:44.912 }, 00:14:44.912 { 00:14:44.912 "name": "BaseBdev4", 00:14:44.912 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:44.912 "is_configured": true, 00:14:44.912 "data_offset": 0, 00:14:44.912 "data_size": 65536 00:14:44.912 } 00:14:44.912 ] 00:14:44.912 }' 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.912 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.480 [2024-11-27 08:46:42.058883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.480 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.481 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.481 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.481 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.481 "name": "Existed_Raid", 00:14:45.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.481 
"strip_size_kb": 0, 00:14:45.481 "state": "configuring", 00:14:45.481 "raid_level": "raid1", 00:14:45.481 "superblock": false, 00:14:45.481 "num_base_bdevs": 4, 00:14:45.481 "num_base_bdevs_discovered": 2, 00:14:45.481 "num_base_bdevs_operational": 4, 00:14:45.481 "base_bdevs_list": [ 00:14:45.481 { 00:14:45.481 "name": "BaseBdev1", 00:14:45.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.481 "is_configured": false, 00:14:45.481 "data_offset": 0, 00:14:45.481 "data_size": 0 00:14:45.481 }, 00:14:45.481 { 00:14:45.481 "name": null, 00:14:45.481 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:45.481 "is_configured": false, 00:14:45.481 "data_offset": 0, 00:14:45.481 "data_size": 65536 00:14:45.481 }, 00:14:45.481 { 00:14:45.481 "name": "BaseBdev3", 00:14:45.481 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:45.481 "is_configured": true, 00:14:45.481 "data_offset": 0, 00:14:45.481 "data_size": 65536 00:14:45.481 }, 00:14:45.481 { 00:14:45.481 "name": "BaseBdev4", 00:14:45.481 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:45.481 "is_configured": true, 00:14:45.481 "data_offset": 0, 00:14:45.481 "data_size": 65536 00:14:45.481 } 00:14:45.481 ] 00:14:45.481 }' 00:14:45.481 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.481 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.048 08:46:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.048 [2024-11-27 08:46:42.700421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.048 BaseBdev1 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.048 [ 00:14:46.048 { 00:14:46.048 "name": "BaseBdev1", 00:14:46.048 "aliases": [ 00:14:46.048 "91e4edf9-8577-45a5-a462-447aeb7a6113" 00:14:46.048 ], 00:14:46.048 "product_name": "Malloc disk", 00:14:46.048 "block_size": 512, 00:14:46.048 "num_blocks": 65536, 00:14:46.048 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:46.048 "assigned_rate_limits": { 00:14:46.048 "rw_ios_per_sec": 0, 00:14:46.048 "rw_mbytes_per_sec": 0, 00:14:46.048 "r_mbytes_per_sec": 0, 00:14:46.048 "w_mbytes_per_sec": 0 00:14:46.048 }, 00:14:46.048 "claimed": true, 00:14:46.048 "claim_type": "exclusive_write", 00:14:46.048 "zoned": false, 00:14:46.048 "supported_io_types": { 00:14:46.048 "read": true, 00:14:46.048 "write": true, 00:14:46.048 "unmap": true, 00:14:46.048 "flush": true, 00:14:46.048 "reset": true, 00:14:46.048 "nvme_admin": false, 00:14:46.048 "nvme_io": false, 00:14:46.048 "nvme_io_md": false, 00:14:46.048 "write_zeroes": true, 00:14:46.048 "zcopy": true, 00:14:46.048 "get_zone_info": false, 00:14:46.048 "zone_management": false, 00:14:46.048 "zone_append": false, 00:14:46.048 "compare": false, 00:14:46.048 "compare_and_write": false, 00:14:46.048 "abort": true, 00:14:46.048 "seek_hole": false, 00:14:46.048 "seek_data": false, 00:14:46.048 "copy": true, 00:14:46.048 "nvme_iov_md": false 00:14:46.048 }, 00:14:46.048 "memory_domains": [ 00:14:46.048 { 00:14:46.048 "dma_device_id": "system", 00:14:46.048 "dma_device_type": 1 00:14:46.048 }, 00:14:46.048 { 00:14:46.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.048 "dma_device_type": 2 00:14:46.048 } 00:14:46.048 ], 00:14:46.048 "driver_specific": {} 00:14:46.048 } 00:14:46.048 ] 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # return 0 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.048 "name": "Existed_Raid", 00:14:46.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.048 
"strip_size_kb": 0, 00:14:46.048 "state": "configuring", 00:14:46.048 "raid_level": "raid1", 00:14:46.048 "superblock": false, 00:14:46.048 "num_base_bdevs": 4, 00:14:46.048 "num_base_bdevs_discovered": 3, 00:14:46.048 "num_base_bdevs_operational": 4, 00:14:46.048 "base_bdevs_list": [ 00:14:46.048 { 00:14:46.048 "name": "BaseBdev1", 00:14:46.048 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:46.048 "is_configured": true, 00:14:46.048 "data_offset": 0, 00:14:46.048 "data_size": 65536 00:14:46.048 }, 00:14:46.048 { 00:14:46.048 "name": null, 00:14:46.048 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:46.048 "is_configured": false, 00:14:46.048 "data_offset": 0, 00:14:46.048 "data_size": 65536 00:14:46.048 }, 00:14:46.048 { 00:14:46.048 "name": "BaseBdev3", 00:14:46.048 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:46.048 "is_configured": true, 00:14:46.048 "data_offset": 0, 00:14:46.048 "data_size": 65536 00:14:46.048 }, 00:14:46.048 { 00:14:46.048 "name": "BaseBdev4", 00:14:46.048 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:46.048 "is_configured": true, 00:14:46.048 "data_offset": 0, 00:14:46.048 "data_size": 65536 00:14:46.048 } 00:14:46.048 ] 00:14:46.048 }' 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.048 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.624 
08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.624 [2024-11-27 08:46:43.288679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.624 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.625 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.625 "name": "Existed_Raid", 00:14:46.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.625 "strip_size_kb": 0, 00:14:46.625 "state": "configuring", 00:14:46.625 "raid_level": "raid1", 00:14:46.625 "superblock": false, 00:14:46.625 "num_base_bdevs": 4, 00:14:46.625 "num_base_bdevs_discovered": 2, 00:14:46.625 "num_base_bdevs_operational": 4, 00:14:46.625 "base_bdevs_list": [ 00:14:46.625 { 00:14:46.625 "name": "BaseBdev1", 00:14:46.625 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:46.625 "is_configured": true, 00:14:46.625 "data_offset": 0, 00:14:46.625 "data_size": 65536 00:14:46.625 }, 00:14:46.625 { 00:14:46.625 "name": null, 00:14:46.625 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:46.625 "is_configured": false, 00:14:46.625 "data_offset": 0, 00:14:46.625 "data_size": 65536 00:14:46.625 }, 00:14:46.625 { 00:14:46.625 "name": null, 00:14:46.625 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:46.625 "is_configured": false, 00:14:46.625 "data_offset": 0, 00:14:46.625 "data_size": 65536 00:14:46.625 }, 00:14:46.625 { 00:14:46.625 "name": "BaseBdev4", 00:14:46.625 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:46.625 "is_configured": true, 00:14:46.625 "data_offset": 0, 00:14:46.625 "data_size": 65536 00:14:46.625 } 00:14:46.625 ] 00:14:46.625 }' 00:14:46.625 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.625 08:46:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.192 [2024-11-27 08:46:43.868823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.192 "name": "Existed_Raid", 00:14:47.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.192 "strip_size_kb": 0, 00:14:47.192 "state": "configuring", 00:14:47.192 "raid_level": "raid1", 00:14:47.192 "superblock": false, 00:14:47.192 "num_base_bdevs": 4, 00:14:47.192 "num_base_bdevs_discovered": 3, 00:14:47.192 "num_base_bdevs_operational": 4, 00:14:47.192 "base_bdevs_list": [ 00:14:47.192 { 00:14:47.192 "name": "BaseBdev1", 00:14:47.192 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:47.192 "is_configured": true, 00:14:47.192 "data_offset": 0, 00:14:47.192 "data_size": 65536 00:14:47.192 }, 00:14:47.192 { 00:14:47.192 "name": null, 00:14:47.192 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:47.192 "is_configured": false, 00:14:47.192 "data_offset": 0, 00:14:47.192 "data_size": 65536 00:14:47.192 }, 00:14:47.192 { 
00:14:47.192 "name": "BaseBdev3", 00:14:47.192 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:47.192 "is_configured": true, 00:14:47.192 "data_offset": 0, 00:14:47.192 "data_size": 65536 00:14:47.192 }, 00:14:47.192 { 00:14:47.192 "name": "BaseBdev4", 00:14:47.192 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:47.192 "is_configured": true, 00:14:47.192 "data_offset": 0, 00:14:47.192 "data_size": 65536 00:14:47.192 } 00:14:47.192 ] 00:14:47.192 }' 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.192 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.760 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.760 [2024-11-27 08:46:44.452995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.019 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.019 "name": "Existed_Raid", 00:14:48.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.019 "strip_size_kb": 0, 00:14:48.019 "state": "configuring", 00:14:48.019 "raid_level": "raid1", 00:14:48.019 "superblock": false, 00:14:48.019 
"num_base_bdevs": 4, 00:14:48.019 "num_base_bdevs_discovered": 2, 00:14:48.019 "num_base_bdevs_operational": 4, 00:14:48.019 "base_bdevs_list": [ 00:14:48.019 { 00:14:48.019 "name": null, 00:14:48.019 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:48.019 "is_configured": false, 00:14:48.019 "data_offset": 0, 00:14:48.019 "data_size": 65536 00:14:48.019 }, 00:14:48.019 { 00:14:48.019 "name": null, 00:14:48.019 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:48.019 "is_configured": false, 00:14:48.019 "data_offset": 0, 00:14:48.019 "data_size": 65536 00:14:48.019 }, 00:14:48.019 { 00:14:48.019 "name": "BaseBdev3", 00:14:48.019 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:48.019 "is_configured": true, 00:14:48.019 "data_offset": 0, 00:14:48.019 "data_size": 65536 00:14:48.019 }, 00:14:48.019 { 00:14:48.019 "name": "BaseBdev4", 00:14:48.019 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:48.019 "is_configured": true, 00:14:48.020 "data_offset": 0, 00:14:48.020 "data_size": 65536 00:14:48.020 } 00:14:48.020 ] 00:14:48.020 }' 00:14:48.020 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.020 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:48.588 08:46:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.588 [2024-11-27 08:46:45.138362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.588 08:46:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.588 "name": "Existed_Raid", 00:14:48.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.588 "strip_size_kb": 0, 00:14:48.588 "state": "configuring", 00:14:48.588 "raid_level": "raid1", 00:14:48.588 "superblock": false, 00:14:48.588 "num_base_bdevs": 4, 00:14:48.588 "num_base_bdevs_discovered": 3, 00:14:48.588 "num_base_bdevs_operational": 4, 00:14:48.588 "base_bdevs_list": [ 00:14:48.588 { 00:14:48.588 "name": null, 00:14:48.588 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:48.588 "is_configured": false, 00:14:48.588 "data_offset": 0, 00:14:48.588 "data_size": 65536 00:14:48.588 }, 00:14:48.588 { 00:14:48.588 "name": "BaseBdev2", 00:14:48.588 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:48.588 "is_configured": true, 00:14:48.588 "data_offset": 0, 00:14:48.588 "data_size": 65536 00:14:48.588 }, 00:14:48.588 { 00:14:48.588 "name": "BaseBdev3", 00:14:48.588 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:48.588 "is_configured": true, 00:14:48.588 "data_offset": 0, 00:14:48.588 "data_size": 65536 00:14:48.588 }, 00:14:48.588 { 00:14:48.588 "name": "BaseBdev4", 00:14:48.588 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:48.588 "is_configured": true, 00:14:48.588 "data_offset": 0, 00:14:48.588 "data_size": 65536 00:14:48.588 } 00:14:48.588 ] 00:14:48.588 }' 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.588 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 91e4edf9-8577-45a5-a462-447aeb7a6113 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.155 [2024-11-27 08:46:45.787487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:49.155 [2024-11-27 08:46:45.787770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:49.155 [2024-11-27 08:46:45.787801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:49.155 [2024-11-27 08:46:45.788179] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:49.155 [2024-11-27 08:46:45.788434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:49.155 [2024-11-27 08:46:45.788453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:49.155 [2024-11-27 08:46:45.788782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.155 NewBaseBdev 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local i 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:49.155 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.155 08:46:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.155 [ 00:14:49.155 { 00:14:49.155 "name": "NewBaseBdev", 00:14:49.155 "aliases": [ 00:14:49.155 "91e4edf9-8577-45a5-a462-447aeb7a6113" 00:14:49.155 ], 00:14:49.155 "product_name": "Malloc disk", 00:14:49.155 "block_size": 512, 00:14:49.155 "num_blocks": 65536, 00:14:49.155 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:49.155 "assigned_rate_limits": { 00:14:49.155 "rw_ios_per_sec": 0, 00:14:49.155 "rw_mbytes_per_sec": 0, 00:14:49.155 "r_mbytes_per_sec": 0, 00:14:49.155 "w_mbytes_per_sec": 0 00:14:49.155 }, 00:14:49.155 "claimed": true, 00:14:49.155 "claim_type": "exclusive_write", 00:14:49.155 "zoned": false, 00:14:49.155 "supported_io_types": { 00:14:49.155 "read": true, 00:14:49.155 "write": true, 00:14:49.155 "unmap": true, 00:14:49.155 "flush": true, 00:14:49.155 "reset": true, 00:14:49.155 "nvme_admin": false, 00:14:49.155 "nvme_io": false, 00:14:49.155 "nvme_io_md": false, 00:14:49.155 "write_zeroes": true, 00:14:49.155 "zcopy": true, 00:14:49.155 "get_zone_info": false, 00:14:49.155 "zone_management": false, 00:14:49.155 "zone_append": false, 00:14:49.155 "compare": false, 00:14:49.156 "compare_and_write": false, 00:14:49.156 "abort": true, 00:14:49.156 "seek_hole": false, 00:14:49.156 "seek_data": false, 00:14:49.156 "copy": true, 00:14:49.156 "nvme_iov_md": false 00:14:49.156 }, 00:14:49.156 "memory_domains": [ 00:14:49.156 { 00:14:49.156 "dma_device_id": "system", 00:14:49.156 "dma_device_type": 1 00:14:49.156 }, 00:14:49.156 { 00:14:49.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.156 "dma_device_type": 2 00:14:49.156 } 00:14:49.156 ], 00:14:49.156 "driver_specific": {} 00:14:49.156 } 00:14:49.156 ] 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:14:49.156 08:46:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.156 "name": "Existed_Raid", 00:14:49.156 "uuid": "da262179-77f9-4253-8284-4c16cbe84006", 00:14:49.156 "strip_size_kb": 0, 00:14:49.156 "state": "online", 00:14:49.156 "raid_level": "raid1", 
00:14:49.156 "superblock": false, 00:14:49.156 "num_base_bdevs": 4, 00:14:49.156 "num_base_bdevs_discovered": 4, 00:14:49.156 "num_base_bdevs_operational": 4, 00:14:49.156 "base_bdevs_list": [ 00:14:49.156 { 00:14:49.156 "name": "NewBaseBdev", 00:14:49.156 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:49.156 "is_configured": true, 00:14:49.156 "data_offset": 0, 00:14:49.156 "data_size": 65536 00:14:49.156 }, 00:14:49.156 { 00:14:49.156 "name": "BaseBdev2", 00:14:49.156 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:49.156 "is_configured": true, 00:14:49.156 "data_offset": 0, 00:14:49.156 "data_size": 65536 00:14:49.156 }, 00:14:49.156 { 00:14:49.156 "name": "BaseBdev3", 00:14:49.156 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:49.156 "is_configured": true, 00:14:49.156 "data_offset": 0, 00:14:49.156 "data_size": 65536 00:14:49.156 }, 00:14:49.156 { 00:14:49.156 "name": "BaseBdev4", 00:14:49.156 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:49.156 "is_configured": true, 00:14:49.156 "data_offset": 0, 00:14:49.156 "data_size": 65536 00:14:49.156 } 00:14:49.156 ] 00:14:49.156 }' 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.156 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.722 [2024-11-27 08:46:46.344146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.722 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.722 "name": "Existed_Raid", 00:14:49.722 "aliases": [ 00:14:49.722 "da262179-77f9-4253-8284-4c16cbe84006" 00:14:49.722 ], 00:14:49.722 "product_name": "Raid Volume", 00:14:49.722 "block_size": 512, 00:14:49.722 "num_blocks": 65536, 00:14:49.722 "uuid": "da262179-77f9-4253-8284-4c16cbe84006", 00:14:49.722 "assigned_rate_limits": { 00:14:49.722 "rw_ios_per_sec": 0, 00:14:49.722 "rw_mbytes_per_sec": 0, 00:14:49.722 "r_mbytes_per_sec": 0, 00:14:49.722 "w_mbytes_per_sec": 0 00:14:49.722 }, 00:14:49.722 "claimed": false, 00:14:49.722 "zoned": false, 00:14:49.722 "supported_io_types": { 00:14:49.722 "read": true, 00:14:49.722 "write": true, 00:14:49.722 "unmap": false, 00:14:49.722 "flush": false, 00:14:49.722 "reset": true, 00:14:49.722 "nvme_admin": false, 00:14:49.722 "nvme_io": false, 00:14:49.722 "nvme_io_md": false, 00:14:49.722 "write_zeroes": true, 00:14:49.722 "zcopy": false, 00:14:49.722 "get_zone_info": false, 00:14:49.722 "zone_management": false, 00:14:49.722 "zone_append": false, 00:14:49.722 "compare": false, 00:14:49.722 "compare_and_write": false, 00:14:49.722 "abort": false, 00:14:49.722 "seek_hole": false, 00:14:49.722 "seek_data": false, 00:14:49.722 "copy": false, 00:14:49.722 
"nvme_iov_md": false 00:14:49.722 }, 00:14:49.722 "memory_domains": [ 00:14:49.722 { 00:14:49.722 "dma_device_id": "system", 00:14:49.722 "dma_device_type": 1 00:14:49.722 }, 00:14:49.722 { 00:14:49.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.722 "dma_device_type": 2 00:14:49.722 }, 00:14:49.722 { 00:14:49.722 "dma_device_id": "system", 00:14:49.722 "dma_device_type": 1 00:14:49.722 }, 00:14:49.722 { 00:14:49.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.722 "dma_device_type": 2 00:14:49.722 }, 00:14:49.722 { 00:14:49.722 "dma_device_id": "system", 00:14:49.722 "dma_device_type": 1 00:14:49.722 }, 00:14:49.722 { 00:14:49.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.722 "dma_device_type": 2 00:14:49.722 }, 00:14:49.722 { 00:14:49.722 "dma_device_id": "system", 00:14:49.722 "dma_device_type": 1 00:14:49.722 }, 00:14:49.722 { 00:14:49.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.722 "dma_device_type": 2 00:14:49.722 } 00:14:49.722 ], 00:14:49.722 "driver_specific": { 00:14:49.722 "raid": { 00:14:49.722 "uuid": "da262179-77f9-4253-8284-4c16cbe84006", 00:14:49.722 "strip_size_kb": 0, 00:14:49.722 "state": "online", 00:14:49.722 "raid_level": "raid1", 00:14:49.723 "superblock": false, 00:14:49.723 "num_base_bdevs": 4, 00:14:49.723 "num_base_bdevs_discovered": 4, 00:14:49.723 "num_base_bdevs_operational": 4, 00:14:49.723 "base_bdevs_list": [ 00:14:49.723 { 00:14:49.723 "name": "NewBaseBdev", 00:14:49.723 "uuid": "91e4edf9-8577-45a5-a462-447aeb7a6113", 00:14:49.723 "is_configured": true, 00:14:49.723 "data_offset": 0, 00:14:49.723 "data_size": 65536 00:14:49.723 }, 00:14:49.723 { 00:14:49.723 "name": "BaseBdev2", 00:14:49.723 "uuid": "2a3dc74f-8cea-4b2f-b59c-5f8641d173f3", 00:14:49.723 "is_configured": true, 00:14:49.723 "data_offset": 0, 00:14:49.723 "data_size": 65536 00:14:49.723 }, 00:14:49.723 { 00:14:49.723 "name": "BaseBdev3", 00:14:49.723 "uuid": "e0c50d03-4a42-41f0-925b-851efdbc299b", 00:14:49.723 "is_configured": true, 
00:14:49.723 "data_offset": 0, 00:14:49.723 "data_size": 65536 00:14:49.723 }, 00:14:49.723 { 00:14:49.723 "name": "BaseBdev4", 00:14:49.723 "uuid": "971b3a18-0908-4a77-98de-05bf23551398", 00:14:49.723 "is_configured": true, 00:14:49.723 "data_offset": 0, 00:14:49.723 "data_size": 65536 00:14:49.723 } 00:14:49.723 ] 00:14:49.723 } 00:14:49.723 } 00:14:49.723 }' 00:14:49.723 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.723 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:49.723 BaseBdev2 00:14:49.723 BaseBdev3 00:14:49.723 BaseBdev4' 00:14:49.723 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.982 [2024-11-27 08:46:46.703792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.982 [2024-11-27 08:46:46.703832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.982 [2024-11-27 08:46:46.703955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.982 [2024-11-27 08:46:46.704390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.982 [2024-11-27 08:46:46.704415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73476 
00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 73476 ']' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # kill -0 73476 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # uname 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:14:49.982 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 73476 00:14:50.276 killing process with pid 73476 00:14:50.276 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:14:50.276 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:14:50.276 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 73476' 00:14:50.276 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # kill 73476 00:14:50.276 [2024-11-27 08:46:46.739677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.276 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@975 -- # wait 73476 00:14:50.535 [2024-11-27 08:46:47.117810] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.469 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:51.469 00:14:51.470 real 0m12.907s 00:14:51.470 user 0m21.264s 00:14:51.470 sys 0m1.827s 00:14:51.470 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:14:51.470 ************************************ 00:14:51.470 END TEST raid_state_function_test 00:14:51.470 ************************************ 00:14:51.470 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.750 08:46:48 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:51.750 08:46:48 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:14:51.750 08:46:48 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:14:51.750 08:46:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.750 ************************************ 00:14:51.750 START TEST raid_state_function_test_sb 00:14:51.750 ************************************ 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 4 true 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.750 08:46:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:51.750 Process raid pid: 74159 00:14:51.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74159 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74159' 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74159 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 74159 ']' 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.750 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:51.750 [2024-11-27 08:46:48.368204] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:14:51.750 [2024-11-27 08:46:48.368379] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.010 [2024-11-27 08:46:48.544099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.010 [2024-11-27 08:46:48.690220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.268 [2024-11-27 08:46:48.916222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.268 [2024-11-27 08:46:48.916290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.836 [2024-11-27 08:46:49.312533] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.836 [2024-11-27 08:46:49.312610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.836 [2024-11-27 08:46:49.312628] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.836 [2024-11-27 08:46:49.312647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.836 [2024-11-27 08:46:49.312657] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:52.836 [2024-11-27 08:46:49.312673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.836 [2024-11-27 08:46:49.312683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.836 [2024-11-27 08:46:49.312698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.836 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.837 08:46:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.837 "name": "Existed_Raid", 00:14:52.837 "uuid": "d1bd54a8-77ea-4643-bd12-f200de1782f5", 00:14:52.837 "strip_size_kb": 0, 00:14:52.837 "state": "configuring", 00:14:52.837 "raid_level": "raid1", 00:14:52.837 "superblock": true, 00:14:52.837 "num_base_bdevs": 4, 00:14:52.837 "num_base_bdevs_discovered": 0, 00:14:52.837 "num_base_bdevs_operational": 4, 00:14:52.837 "base_bdevs_list": [ 00:14:52.837 { 00:14:52.837 "name": "BaseBdev1", 00:14:52.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.837 "is_configured": false, 00:14:52.837 "data_offset": 0, 00:14:52.837 "data_size": 0 00:14:52.837 }, 00:14:52.837 { 00:14:52.837 "name": "BaseBdev2", 00:14:52.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.837 "is_configured": false, 00:14:52.837 "data_offset": 0, 00:14:52.837 "data_size": 0 00:14:52.837 }, 00:14:52.837 { 00:14:52.837 "name": "BaseBdev3", 00:14:52.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.837 "is_configured": false, 00:14:52.837 "data_offset": 0, 00:14:52.837 "data_size": 0 00:14:52.837 }, 00:14:52.837 { 00:14:52.837 "name": "BaseBdev4", 00:14:52.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.837 "is_configured": false, 00:14:52.837 "data_offset": 0, 00:14:52.837 "data_size": 0 00:14:52.837 } 00:14:52.837 ] 00:14:52.837 }' 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.837 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.095 08:46:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.095 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.096 [2024-11-27 08:46:49.836576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.096 [2024-11-27 08:46:49.836638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.096 [2024-11-27 08:46:49.844551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.096 [2024-11-27 08:46:49.844606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.096 [2024-11-27 08:46:49.844622] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.096 [2024-11-27 08:46:49.844638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.096 [2024-11-27 08:46:49.844648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:53.096 [2024-11-27 08:46:49.844664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:53.096 [2024-11-27 08:46:49.844674] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:14:53.096 [2024-11-27 08:46:49.844689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.096 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.355 [2024-11-27 08:46:49.892759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.355 BaseBdev1 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.355 [ 00:14:53.355 { 00:14:53.355 "name": "BaseBdev1", 00:14:53.355 "aliases": [ 00:14:53.355 "0117bff5-356d-40bf-995a-24d6ccbbb014" 00:14:53.355 ], 00:14:53.355 "product_name": "Malloc disk", 00:14:53.355 "block_size": 512, 00:14:53.355 "num_blocks": 65536, 00:14:53.355 "uuid": "0117bff5-356d-40bf-995a-24d6ccbbb014", 00:14:53.355 "assigned_rate_limits": { 00:14:53.355 "rw_ios_per_sec": 0, 00:14:53.355 "rw_mbytes_per_sec": 0, 00:14:53.355 "r_mbytes_per_sec": 0, 00:14:53.355 "w_mbytes_per_sec": 0 00:14:53.355 }, 00:14:53.355 "claimed": true, 00:14:53.355 "claim_type": "exclusive_write", 00:14:53.355 "zoned": false, 00:14:53.355 "supported_io_types": { 00:14:53.355 "read": true, 00:14:53.355 "write": true, 00:14:53.355 "unmap": true, 00:14:53.355 "flush": true, 00:14:53.355 "reset": true, 00:14:53.355 "nvme_admin": false, 00:14:53.355 "nvme_io": false, 00:14:53.355 "nvme_io_md": false, 00:14:53.355 "write_zeroes": true, 00:14:53.355 "zcopy": true, 00:14:53.355 "get_zone_info": false, 00:14:53.355 "zone_management": false, 00:14:53.355 "zone_append": false, 00:14:53.355 "compare": false, 00:14:53.355 "compare_and_write": false, 00:14:53.355 "abort": true, 00:14:53.355 "seek_hole": false, 00:14:53.355 "seek_data": false, 00:14:53.355 "copy": true, 00:14:53.355 "nvme_iov_md": false 00:14:53.355 }, 00:14:53.355 "memory_domains": [ 00:14:53.355 { 00:14:53.355 "dma_device_id": "system", 00:14:53.355 "dma_device_type": 1 00:14:53.355 }, 00:14:53.355 { 00:14:53.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.355 "dma_device_type": 2 00:14:53.355 } 00:14:53.355 
], 00:14:53.355 "driver_specific": {} 00:14:53.355 } 00:14:53.355 ] 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.355 08:46:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.355 "name": "Existed_Raid", 00:14:53.355 "uuid": "12fccafe-9d89-45db-b71c-db26fa0101b5", 00:14:53.355 "strip_size_kb": 0, 00:14:53.355 "state": "configuring", 00:14:53.355 "raid_level": "raid1", 00:14:53.355 "superblock": true, 00:14:53.355 "num_base_bdevs": 4, 00:14:53.355 "num_base_bdevs_discovered": 1, 00:14:53.355 "num_base_bdevs_operational": 4, 00:14:53.355 "base_bdevs_list": [ 00:14:53.355 { 00:14:53.355 "name": "BaseBdev1", 00:14:53.355 "uuid": "0117bff5-356d-40bf-995a-24d6ccbbb014", 00:14:53.355 "is_configured": true, 00:14:53.355 "data_offset": 2048, 00:14:53.355 "data_size": 63488 00:14:53.355 }, 00:14:53.355 { 00:14:53.355 "name": "BaseBdev2", 00:14:53.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.355 "is_configured": false, 00:14:53.355 "data_offset": 0, 00:14:53.355 "data_size": 0 00:14:53.355 }, 00:14:53.355 { 00:14:53.355 "name": "BaseBdev3", 00:14:53.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.355 "is_configured": false, 00:14:53.355 "data_offset": 0, 00:14:53.355 "data_size": 0 00:14:53.355 }, 00:14:53.355 { 00:14:53.355 "name": "BaseBdev4", 00:14:53.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.355 "is_configured": false, 00:14:53.355 "data_offset": 0, 00:14:53.355 "data_size": 0 00:14:53.355 } 00:14:53.355 ] 00:14:53.355 }' 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.355 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.923 08:46:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.923 [2024-11-27 08:46:50.376962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.923 [2024-11-27 08:46:50.377042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.923 [2024-11-27 08:46:50.385015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.923 [2024-11-27 08:46:50.387676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.923 [2024-11-27 08:46:50.387737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.923 [2024-11-27 08:46:50.387754] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:53.923 [2024-11-27 08:46:50.387772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:53.923 [2024-11-27 08:46:50.387783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:53.923 [2024-11-27 08:46:50.387797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:53.923 "name": "Existed_Raid", 00:14:53.923 "uuid": "2bea0cfe-7cb6-42e9-8995-fd82c0b96430", 00:14:53.923 "strip_size_kb": 0, 00:14:53.923 "state": "configuring", 00:14:53.923 "raid_level": "raid1", 00:14:53.923 "superblock": true, 00:14:53.923 "num_base_bdevs": 4, 00:14:53.923 "num_base_bdevs_discovered": 1, 00:14:53.923 "num_base_bdevs_operational": 4, 00:14:53.923 "base_bdevs_list": [ 00:14:53.923 { 00:14:53.923 "name": "BaseBdev1", 00:14:53.923 "uuid": "0117bff5-356d-40bf-995a-24d6ccbbb014", 00:14:53.923 "is_configured": true, 00:14:53.923 "data_offset": 2048, 00:14:53.923 "data_size": 63488 00:14:53.923 }, 00:14:53.923 { 00:14:53.923 "name": "BaseBdev2", 00:14:53.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.923 "is_configured": false, 00:14:53.923 "data_offset": 0, 00:14:53.923 "data_size": 0 00:14:53.923 }, 00:14:53.923 { 00:14:53.923 "name": "BaseBdev3", 00:14:53.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.923 "is_configured": false, 00:14:53.923 "data_offset": 0, 00:14:53.923 "data_size": 0 00:14:53.923 }, 00:14:53.923 { 00:14:53.923 "name": "BaseBdev4", 00:14:53.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.923 "is_configured": false, 00:14:53.923 "data_offset": 0, 00:14:53.923 "data_size": 0 00:14:53.923 } 00:14:53.923 ] 00:14:53.923 }' 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.923 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.182 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.182 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.182 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.516 [2024-11-27 08:46:50.942805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:54.516 BaseBdev2 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.516 [ 00:14:54.516 { 00:14:54.516 "name": "BaseBdev2", 00:14:54.516 "aliases": [ 00:14:54.516 "c7c40e5e-ba30-4065-9ae3-f7db56dcdce1" 00:14:54.516 ], 00:14:54.516 "product_name": "Malloc disk", 00:14:54.516 "block_size": 512, 00:14:54.516 "num_blocks": 65536, 00:14:54.516 "uuid": "c7c40e5e-ba30-4065-9ae3-f7db56dcdce1", 00:14:54.516 
"assigned_rate_limits": { 00:14:54.516 "rw_ios_per_sec": 0, 00:14:54.516 "rw_mbytes_per_sec": 0, 00:14:54.516 "r_mbytes_per_sec": 0, 00:14:54.516 "w_mbytes_per_sec": 0 00:14:54.516 }, 00:14:54.516 "claimed": true, 00:14:54.516 "claim_type": "exclusive_write", 00:14:54.516 "zoned": false, 00:14:54.516 "supported_io_types": { 00:14:54.516 "read": true, 00:14:54.516 "write": true, 00:14:54.516 "unmap": true, 00:14:54.516 "flush": true, 00:14:54.516 "reset": true, 00:14:54.516 "nvme_admin": false, 00:14:54.516 "nvme_io": false, 00:14:54.516 "nvme_io_md": false, 00:14:54.516 "write_zeroes": true, 00:14:54.516 "zcopy": true, 00:14:54.516 "get_zone_info": false, 00:14:54.516 "zone_management": false, 00:14:54.516 "zone_append": false, 00:14:54.516 "compare": false, 00:14:54.516 "compare_and_write": false, 00:14:54.516 "abort": true, 00:14:54.516 "seek_hole": false, 00:14:54.516 "seek_data": false, 00:14:54.516 "copy": true, 00:14:54.516 "nvme_iov_md": false 00:14:54.516 }, 00:14:54.516 "memory_domains": [ 00:14:54.516 { 00:14:54.516 "dma_device_id": "system", 00:14:54.516 "dma_device_type": 1 00:14:54.516 }, 00:14:54.516 { 00:14:54.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.516 "dma_device_type": 2 00:14:54.516 } 00:14:54.516 ], 00:14:54.516 "driver_specific": {} 00:14:54.516 } 00:14:54.516 ] 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.516 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.516 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.516 "name": "Existed_Raid", 00:14:54.516 "uuid": "2bea0cfe-7cb6-42e9-8995-fd82c0b96430", 00:14:54.516 "strip_size_kb": 0, 00:14:54.516 "state": "configuring", 00:14:54.516 "raid_level": "raid1", 00:14:54.516 "superblock": true, 00:14:54.516 "num_base_bdevs": 4, 00:14:54.516 "num_base_bdevs_discovered": 2, 00:14:54.516 "num_base_bdevs_operational": 4, 
00:14:54.516 "base_bdevs_list": [ 00:14:54.516 { 00:14:54.516 "name": "BaseBdev1", 00:14:54.516 "uuid": "0117bff5-356d-40bf-995a-24d6ccbbb014", 00:14:54.516 "is_configured": true, 00:14:54.516 "data_offset": 2048, 00:14:54.516 "data_size": 63488 00:14:54.516 }, 00:14:54.516 { 00:14:54.517 "name": "BaseBdev2", 00:14:54.517 "uuid": "c7c40e5e-ba30-4065-9ae3-f7db56dcdce1", 00:14:54.517 "is_configured": true, 00:14:54.517 "data_offset": 2048, 00:14:54.517 "data_size": 63488 00:14:54.517 }, 00:14:54.517 { 00:14:54.517 "name": "BaseBdev3", 00:14:54.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.517 "is_configured": false, 00:14:54.517 "data_offset": 0, 00:14:54.517 "data_size": 0 00:14:54.517 }, 00:14:54.517 { 00:14:54.517 "name": "BaseBdev4", 00:14:54.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.517 "is_configured": false, 00:14:54.517 "data_offset": 0, 00:14:54.517 "data_size": 0 00:14:54.517 } 00:14:54.517 ] 00:14:54.517 }' 00:14:54.517 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.517 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.775 [2024-11-27 08:46:51.525063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.775 BaseBdev3 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_name=BaseBdev3 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.775 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.034 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.034 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:55.034 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.034 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.034 [ 00:14:55.034 { 00:14:55.034 "name": "BaseBdev3", 00:14:55.034 "aliases": [ 00:14:55.034 "e8041ed8-a15d-46c3-b27d-e3ae765f1560" 00:14:55.034 ], 00:14:55.034 "product_name": "Malloc disk", 00:14:55.034 "block_size": 512, 00:14:55.034 "num_blocks": 65536, 00:14:55.035 "uuid": "e8041ed8-a15d-46c3-b27d-e3ae765f1560", 00:14:55.035 "assigned_rate_limits": { 00:14:55.035 "rw_ios_per_sec": 0, 00:14:55.035 "rw_mbytes_per_sec": 0, 00:14:55.035 "r_mbytes_per_sec": 0, 00:14:55.035 "w_mbytes_per_sec": 0 00:14:55.035 }, 00:14:55.035 "claimed": true, 00:14:55.035 "claim_type": "exclusive_write", 00:14:55.035 "zoned": false, 00:14:55.035 "supported_io_types": { 00:14:55.035 "read": true, 00:14:55.035 
"write": true, 00:14:55.035 "unmap": true, 00:14:55.035 "flush": true, 00:14:55.035 "reset": true, 00:14:55.035 "nvme_admin": false, 00:14:55.035 "nvme_io": false, 00:14:55.035 "nvme_io_md": false, 00:14:55.035 "write_zeroes": true, 00:14:55.035 "zcopy": true, 00:14:55.035 "get_zone_info": false, 00:14:55.035 "zone_management": false, 00:14:55.035 "zone_append": false, 00:14:55.035 "compare": false, 00:14:55.035 "compare_and_write": false, 00:14:55.035 "abort": true, 00:14:55.035 "seek_hole": false, 00:14:55.035 "seek_data": false, 00:14:55.035 "copy": true, 00:14:55.035 "nvme_iov_md": false 00:14:55.035 }, 00:14:55.035 "memory_domains": [ 00:14:55.035 { 00:14:55.035 "dma_device_id": "system", 00:14:55.035 "dma_device_type": 1 00:14:55.035 }, 00:14:55.035 { 00:14:55.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.035 "dma_device_type": 2 00:14:55.035 } 00:14:55.035 ], 00:14:55.035 "driver_specific": {} 00:14:55.035 } 00:14:55.035 ] 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.035 "name": "Existed_Raid", 00:14:55.035 "uuid": "2bea0cfe-7cb6-42e9-8995-fd82c0b96430", 00:14:55.035 "strip_size_kb": 0, 00:14:55.035 "state": "configuring", 00:14:55.035 "raid_level": "raid1", 00:14:55.035 "superblock": true, 00:14:55.035 "num_base_bdevs": 4, 00:14:55.035 "num_base_bdevs_discovered": 3, 00:14:55.035 "num_base_bdevs_operational": 4, 00:14:55.035 "base_bdevs_list": [ 00:14:55.035 { 00:14:55.035 "name": "BaseBdev1", 00:14:55.035 "uuid": "0117bff5-356d-40bf-995a-24d6ccbbb014", 00:14:55.035 "is_configured": true, 00:14:55.035 "data_offset": 2048, 00:14:55.035 "data_size": 63488 00:14:55.035 }, 00:14:55.035 { 00:14:55.035 "name": "BaseBdev2", 00:14:55.035 "uuid": 
"c7c40e5e-ba30-4065-9ae3-f7db56dcdce1", 00:14:55.035 "is_configured": true, 00:14:55.035 "data_offset": 2048, 00:14:55.035 "data_size": 63488 00:14:55.035 }, 00:14:55.035 { 00:14:55.035 "name": "BaseBdev3", 00:14:55.035 "uuid": "e8041ed8-a15d-46c3-b27d-e3ae765f1560", 00:14:55.035 "is_configured": true, 00:14:55.035 "data_offset": 2048, 00:14:55.035 "data_size": 63488 00:14:55.035 }, 00:14:55.035 { 00:14:55.035 "name": "BaseBdev4", 00:14:55.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.035 "is_configured": false, 00:14:55.035 "data_offset": 0, 00:14:55.035 "data_size": 0 00:14:55.035 } 00:14:55.035 ] 00:14:55.035 }' 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.035 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.602 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:55.602 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.602 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.602 [2024-11-27 08:46:52.142859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:55.603 [2024-11-27 08:46:52.143227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:55.603 [2024-11-27 08:46:52.143248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:55.603 [2024-11-27 08:46:52.143629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:55.603 BaseBdev4 00:14:55.603 [2024-11-27 08:46:52.143842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:55.603 [2024-11-27 08:46:52.143864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:55.603 [2024-11-27 08:46:52.144053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.603 [ 00:14:55.603 { 00:14:55.603 "name": "BaseBdev4", 00:14:55.603 "aliases": [ 00:14:55.603 "0fbd9bdb-018c-46de-a7e6-d216aa782dac" 00:14:55.603 ], 00:14:55.603 "product_name": "Malloc disk", 00:14:55.603 "block_size": 512, 00:14:55.603 
"num_blocks": 65536, 00:14:55.603 "uuid": "0fbd9bdb-018c-46de-a7e6-d216aa782dac", 00:14:55.603 "assigned_rate_limits": { 00:14:55.603 "rw_ios_per_sec": 0, 00:14:55.603 "rw_mbytes_per_sec": 0, 00:14:55.603 "r_mbytes_per_sec": 0, 00:14:55.603 "w_mbytes_per_sec": 0 00:14:55.603 }, 00:14:55.603 "claimed": true, 00:14:55.603 "claim_type": "exclusive_write", 00:14:55.603 "zoned": false, 00:14:55.603 "supported_io_types": { 00:14:55.603 "read": true, 00:14:55.603 "write": true, 00:14:55.603 "unmap": true, 00:14:55.603 "flush": true, 00:14:55.603 "reset": true, 00:14:55.603 "nvme_admin": false, 00:14:55.603 "nvme_io": false, 00:14:55.603 "nvme_io_md": false, 00:14:55.603 "write_zeroes": true, 00:14:55.603 "zcopy": true, 00:14:55.603 "get_zone_info": false, 00:14:55.603 "zone_management": false, 00:14:55.603 "zone_append": false, 00:14:55.603 "compare": false, 00:14:55.603 "compare_and_write": false, 00:14:55.603 "abort": true, 00:14:55.603 "seek_hole": false, 00:14:55.603 "seek_data": false, 00:14:55.603 "copy": true, 00:14:55.603 "nvme_iov_md": false 00:14:55.603 }, 00:14:55.603 "memory_domains": [ 00:14:55.603 { 00:14:55.603 "dma_device_id": "system", 00:14:55.603 "dma_device_type": 1 00:14:55.603 }, 00:14:55.603 { 00:14:55.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.603 "dma_device_type": 2 00:14:55.603 } 00:14:55.603 ], 00:14:55.603 "driver_specific": {} 00:14:55.603 } 00:14:55.603 ] 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.603 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.603 "name": "Existed_Raid", 00:14:55.603 "uuid": "2bea0cfe-7cb6-42e9-8995-fd82c0b96430", 00:14:55.603 "strip_size_kb": 0, 00:14:55.603 "state": "online", 00:14:55.603 "raid_level": "raid1", 00:14:55.603 "superblock": true, 00:14:55.603 "num_base_bdevs": 4, 
00:14:55.603 "num_base_bdevs_discovered": 4, 00:14:55.603 "num_base_bdevs_operational": 4, 00:14:55.603 "base_bdevs_list": [ 00:14:55.603 { 00:14:55.603 "name": "BaseBdev1", 00:14:55.603 "uuid": "0117bff5-356d-40bf-995a-24d6ccbbb014", 00:14:55.603 "is_configured": true, 00:14:55.603 "data_offset": 2048, 00:14:55.603 "data_size": 63488 00:14:55.603 }, 00:14:55.603 { 00:14:55.603 "name": "BaseBdev2", 00:14:55.603 "uuid": "c7c40e5e-ba30-4065-9ae3-f7db56dcdce1", 00:14:55.603 "is_configured": true, 00:14:55.603 "data_offset": 2048, 00:14:55.603 "data_size": 63488 00:14:55.603 }, 00:14:55.603 { 00:14:55.603 "name": "BaseBdev3", 00:14:55.603 "uuid": "e8041ed8-a15d-46c3-b27d-e3ae765f1560", 00:14:55.603 "is_configured": true, 00:14:55.603 "data_offset": 2048, 00:14:55.603 "data_size": 63488 00:14:55.603 }, 00:14:55.603 { 00:14:55.603 "name": "BaseBdev4", 00:14:55.603 "uuid": "0fbd9bdb-018c-46de-a7e6-d216aa782dac", 00:14:55.603 "is_configured": true, 00:14:55.603 "data_offset": 2048, 00:14:55.603 "data_size": 63488 00:14:55.604 } 00:14:55.604 ] 00:14:55.604 }' 00:14:55.604 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.604 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:56.171 
08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.171 [2024-11-27 08:46:52.731564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:56.171 "name": "Existed_Raid", 00:14:56.171 "aliases": [ 00:14:56.171 "2bea0cfe-7cb6-42e9-8995-fd82c0b96430" 00:14:56.171 ], 00:14:56.171 "product_name": "Raid Volume", 00:14:56.171 "block_size": 512, 00:14:56.171 "num_blocks": 63488, 00:14:56.171 "uuid": "2bea0cfe-7cb6-42e9-8995-fd82c0b96430", 00:14:56.171 "assigned_rate_limits": { 00:14:56.171 "rw_ios_per_sec": 0, 00:14:56.171 "rw_mbytes_per_sec": 0, 00:14:56.171 "r_mbytes_per_sec": 0, 00:14:56.171 "w_mbytes_per_sec": 0 00:14:56.171 }, 00:14:56.171 "claimed": false, 00:14:56.171 "zoned": false, 00:14:56.171 "supported_io_types": { 00:14:56.171 "read": true, 00:14:56.171 "write": true, 00:14:56.171 "unmap": false, 00:14:56.171 "flush": false, 00:14:56.171 "reset": true, 00:14:56.171 "nvme_admin": false, 00:14:56.171 "nvme_io": false, 00:14:56.171 "nvme_io_md": false, 00:14:56.171 "write_zeroes": true, 00:14:56.171 "zcopy": false, 00:14:56.171 "get_zone_info": false, 00:14:56.171 "zone_management": false, 00:14:56.171 "zone_append": false, 00:14:56.171 "compare": false, 00:14:56.171 "compare_and_write": false, 00:14:56.171 "abort": false, 00:14:56.171 "seek_hole": false, 00:14:56.171 "seek_data": false, 00:14:56.171 "copy": false, 00:14:56.171 
"nvme_iov_md": false 00:14:56.171 }, 00:14:56.171 "memory_domains": [ 00:14:56.171 { 00:14:56.171 "dma_device_id": "system", 00:14:56.171 "dma_device_type": 1 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.171 "dma_device_type": 2 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "dma_device_id": "system", 00:14:56.171 "dma_device_type": 1 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.171 "dma_device_type": 2 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "dma_device_id": "system", 00:14:56.171 "dma_device_type": 1 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.171 "dma_device_type": 2 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "dma_device_id": "system", 00:14:56.171 "dma_device_type": 1 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.171 "dma_device_type": 2 00:14:56.171 } 00:14:56.171 ], 00:14:56.171 "driver_specific": { 00:14:56.171 "raid": { 00:14:56.171 "uuid": "2bea0cfe-7cb6-42e9-8995-fd82c0b96430", 00:14:56.171 "strip_size_kb": 0, 00:14:56.171 "state": "online", 00:14:56.171 "raid_level": "raid1", 00:14:56.171 "superblock": true, 00:14:56.171 "num_base_bdevs": 4, 00:14:56.171 "num_base_bdevs_discovered": 4, 00:14:56.171 "num_base_bdevs_operational": 4, 00:14:56.171 "base_bdevs_list": [ 00:14:56.171 { 00:14:56.171 "name": "BaseBdev1", 00:14:56.171 "uuid": "0117bff5-356d-40bf-995a-24d6ccbbb014", 00:14:56.171 "is_configured": true, 00:14:56.171 "data_offset": 2048, 00:14:56.171 "data_size": 63488 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "name": "BaseBdev2", 00:14:56.171 "uuid": "c7c40e5e-ba30-4065-9ae3-f7db56dcdce1", 00:14:56.171 "is_configured": true, 00:14:56.171 "data_offset": 2048, 00:14:56.171 "data_size": 63488 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "name": "BaseBdev3", 00:14:56.171 "uuid": "e8041ed8-a15d-46c3-b27d-e3ae765f1560", 00:14:56.171 "is_configured": true, 
00:14:56.171 "data_offset": 2048, 00:14:56.171 "data_size": 63488 00:14:56.171 }, 00:14:56.171 { 00:14:56.171 "name": "BaseBdev4", 00:14:56.171 "uuid": "0fbd9bdb-018c-46de-a7e6-d216aa782dac", 00:14:56.171 "is_configured": true, 00:14:56.171 "data_offset": 2048, 00:14:56.171 "data_size": 63488 00:14:56.171 } 00:14:56.171 ] 00:14:56.171 } 00:14:56.171 } 00:14:56.171 }' 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:56.171 BaseBdev2 00:14:56.171 BaseBdev3 00:14:56.171 BaseBdev4' 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.171 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.430 08:46:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.430 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.430 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.431 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.431 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.431 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.431 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:56.431 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.431 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.431 [2024-11-27 08:46:53.099353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.688 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.688 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:56.689 08:46:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.689 "name": "Existed_Raid", 00:14:56.689 "uuid": "2bea0cfe-7cb6-42e9-8995-fd82c0b96430", 00:14:56.689 "strip_size_kb": 0, 00:14:56.689 
"state": "online", 00:14:56.689 "raid_level": "raid1", 00:14:56.689 "superblock": true, 00:14:56.689 "num_base_bdevs": 4, 00:14:56.689 "num_base_bdevs_discovered": 3, 00:14:56.689 "num_base_bdevs_operational": 3, 00:14:56.689 "base_bdevs_list": [ 00:14:56.689 { 00:14:56.689 "name": null, 00:14:56.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.689 "is_configured": false, 00:14:56.689 "data_offset": 0, 00:14:56.689 "data_size": 63488 00:14:56.689 }, 00:14:56.689 { 00:14:56.689 "name": "BaseBdev2", 00:14:56.689 "uuid": "c7c40e5e-ba30-4065-9ae3-f7db56dcdce1", 00:14:56.689 "is_configured": true, 00:14:56.689 "data_offset": 2048, 00:14:56.689 "data_size": 63488 00:14:56.689 }, 00:14:56.689 { 00:14:56.689 "name": "BaseBdev3", 00:14:56.689 "uuid": "e8041ed8-a15d-46c3-b27d-e3ae765f1560", 00:14:56.689 "is_configured": true, 00:14:56.689 "data_offset": 2048, 00:14:56.689 "data_size": 63488 00:14:56.689 }, 00:14:56.689 { 00:14:56.689 "name": "BaseBdev4", 00:14:56.689 "uuid": "0fbd9bdb-018c-46de-a7e6-d216aa782dac", 00:14:56.689 "is_configured": true, 00:14:56.689 "data_offset": 2048, 00:14:56.689 "data_size": 63488 00:14:56.689 } 00:14:56.689 ] 00:14:56.689 }' 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.689 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.082 08:46:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.082 [2024-11-27 08:46:53.690230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.082 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.340 [2024-11-27 08:46:53.842827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:57.340 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.341 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.341 [2024-11-27 08:46:53.993897] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:57.341 [2024-11-27 08:46:53.994207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.341 [2024-11-27 08:46:54.085446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.341 [2024-11-27 08:46:54.085726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.341 [2024-11-27 08:46:54.085901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:57.341 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.341 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:57.341 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:57.341 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:57.341 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.341 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.341 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.600 BaseBdev2 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:57.600 [ 00:14:57.600 { 00:14:57.600 "name": "BaseBdev2", 00:14:57.600 "aliases": [ 00:14:57.600 "d622b477-b582-4c95-a764-67465242ad97" 00:14:57.600 ], 00:14:57.600 "product_name": "Malloc disk", 00:14:57.600 "block_size": 512, 00:14:57.600 "num_blocks": 65536, 00:14:57.600 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:14:57.600 "assigned_rate_limits": { 00:14:57.600 "rw_ios_per_sec": 0, 00:14:57.600 "rw_mbytes_per_sec": 0, 00:14:57.600 "r_mbytes_per_sec": 0, 00:14:57.600 "w_mbytes_per_sec": 0 00:14:57.600 }, 00:14:57.600 "claimed": false, 00:14:57.600 "zoned": false, 00:14:57.600 "supported_io_types": { 00:14:57.600 "read": true, 00:14:57.600 "write": true, 00:14:57.600 "unmap": true, 00:14:57.600 "flush": true, 00:14:57.600 "reset": true, 00:14:57.600 "nvme_admin": false, 00:14:57.600 "nvme_io": false, 00:14:57.600 "nvme_io_md": false, 00:14:57.600 "write_zeroes": true, 00:14:57.600 "zcopy": true, 00:14:57.600 "get_zone_info": false, 00:14:57.600 "zone_management": false, 00:14:57.600 "zone_append": false, 00:14:57.600 "compare": false, 00:14:57.600 "compare_and_write": false, 00:14:57.600 "abort": true, 00:14:57.600 "seek_hole": false, 00:14:57.600 "seek_data": false, 00:14:57.600 "copy": true, 00:14:57.600 "nvme_iov_md": false 00:14:57.600 }, 00:14:57.600 "memory_domains": [ 00:14:57.600 { 00:14:57.600 "dma_device_id": "system", 00:14:57.600 "dma_device_type": 1 00:14:57.600 }, 00:14:57.600 { 00:14:57.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.600 "dma_device_type": 2 00:14:57.600 } 00:14:57.600 ], 00:14:57.600 "driver_specific": {} 00:14:57.600 } 00:14:57.600 ] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:57.600 08:46:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.600 BaseBdev3 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:57.600 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.600 08:46:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.600 [ 00:14:57.600 { 00:14:57.600 "name": "BaseBdev3", 00:14:57.600 "aliases": [ 00:14:57.600 "8af03d03-1f7b-4ae1-baae-c8caf2d94977" 00:14:57.600 ], 00:14:57.600 "product_name": "Malloc disk", 00:14:57.600 "block_size": 512, 00:14:57.600 "num_blocks": 65536, 00:14:57.600 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:14:57.600 "assigned_rate_limits": { 00:14:57.600 "rw_ios_per_sec": 0, 00:14:57.600 "rw_mbytes_per_sec": 0, 00:14:57.600 "r_mbytes_per_sec": 0, 00:14:57.600 "w_mbytes_per_sec": 0 00:14:57.600 }, 00:14:57.600 "claimed": false, 00:14:57.600 "zoned": false, 00:14:57.600 "supported_io_types": { 00:14:57.600 "read": true, 00:14:57.600 "write": true, 00:14:57.600 "unmap": true, 00:14:57.600 "flush": true, 00:14:57.600 "reset": true, 00:14:57.601 "nvme_admin": false, 00:14:57.601 "nvme_io": false, 00:14:57.601 "nvme_io_md": false, 00:14:57.601 "write_zeroes": true, 00:14:57.601 "zcopy": true, 00:14:57.601 "get_zone_info": false, 00:14:57.601 "zone_management": false, 00:14:57.601 "zone_append": false, 00:14:57.601 "compare": false, 00:14:57.601 "compare_and_write": false, 00:14:57.601 "abort": true, 00:14:57.601 "seek_hole": false, 00:14:57.601 "seek_data": false, 00:14:57.601 "copy": true, 00:14:57.601 "nvme_iov_md": false 00:14:57.601 }, 00:14:57.601 "memory_domains": [ 00:14:57.601 { 00:14:57.601 "dma_device_id": "system", 00:14:57.601 "dma_device_type": 1 00:14:57.601 }, 00:14:57.601 { 00:14:57.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.601 "dma_device_type": 2 00:14:57.601 } 00:14:57.601 ], 00:14:57.601 "driver_specific": {} 00:14:57.601 } 00:14:57.601 ] 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.601 BaseBdev4 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.601 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 [ 00:14:57.860 { 00:14:57.860 "name": "BaseBdev4", 00:14:57.860 "aliases": [ 00:14:57.860 "e82643da-2542-45c2-baca-be183da4a1bb" 00:14:57.860 ], 00:14:57.860 "product_name": "Malloc disk", 00:14:57.860 "block_size": 512, 00:14:57.860 "num_blocks": 65536, 00:14:57.860 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:14:57.860 "assigned_rate_limits": { 00:14:57.860 "rw_ios_per_sec": 0, 00:14:57.860 "rw_mbytes_per_sec": 0, 00:14:57.860 "r_mbytes_per_sec": 0, 00:14:57.860 "w_mbytes_per_sec": 0 00:14:57.860 }, 00:14:57.860 "claimed": false, 00:14:57.860 "zoned": false, 00:14:57.860 "supported_io_types": { 00:14:57.860 "read": true, 00:14:57.860 "write": true, 00:14:57.860 "unmap": true, 00:14:57.860 "flush": true, 00:14:57.860 "reset": true, 00:14:57.860 "nvme_admin": false, 00:14:57.860 "nvme_io": false, 00:14:57.860 "nvme_io_md": false, 00:14:57.860 "write_zeroes": true, 00:14:57.860 "zcopy": true, 00:14:57.860 "get_zone_info": false, 00:14:57.860 "zone_management": false, 00:14:57.860 "zone_append": false, 00:14:57.860 "compare": false, 00:14:57.860 "compare_and_write": false, 00:14:57.860 "abort": true, 00:14:57.860 "seek_hole": false, 00:14:57.860 "seek_data": false, 00:14:57.860 "copy": true, 00:14:57.860 "nvme_iov_md": false 00:14:57.860 }, 00:14:57.860 "memory_domains": [ 00:14:57.860 { 00:14:57.860 "dma_device_id": "system", 00:14:57.860 "dma_device_type": 1 00:14:57.860 }, 00:14:57.860 { 00:14:57.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.860 "dma_device_type": 2 00:14:57.860 } 00:14:57.860 ], 00:14:57.860 "driver_specific": {} 00:14:57.860 } 00:14:57.860 ] 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 
00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 [2024-11-27 08:46:54.389358] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.860 [2024-11-27 08:46:54.389441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.860 [2024-11-27 08:46:54.389474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.860 [2024-11-27 08:46:54.392096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.860 [2024-11-27 08:46:54.392168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.860 "name": "Existed_Raid", 00:14:57.860 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:14:57.860 "strip_size_kb": 0, 00:14:57.860 "state": "configuring", 00:14:57.860 "raid_level": "raid1", 00:14:57.860 "superblock": true, 00:14:57.860 "num_base_bdevs": 4, 00:14:57.860 "num_base_bdevs_discovered": 3, 00:14:57.860 "num_base_bdevs_operational": 4, 00:14:57.860 "base_bdevs_list": [ 00:14:57.860 { 00:14:57.860 "name": "BaseBdev1", 00:14:57.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.860 "is_configured": false, 00:14:57.860 "data_offset": 0, 00:14:57.860 "data_size": 0 00:14:57.860 }, 00:14:57.860 { 00:14:57.860 "name": "BaseBdev2", 00:14:57.860 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 
00:14:57.860 "is_configured": true, 00:14:57.860 "data_offset": 2048, 00:14:57.860 "data_size": 63488 00:14:57.860 }, 00:14:57.860 { 00:14:57.860 "name": "BaseBdev3", 00:14:57.860 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:14:57.860 "is_configured": true, 00:14:57.860 "data_offset": 2048, 00:14:57.860 "data_size": 63488 00:14:57.860 }, 00:14:57.860 { 00:14:57.860 "name": "BaseBdev4", 00:14:57.860 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:14:57.860 "is_configured": true, 00:14:57.860 "data_offset": 2048, 00:14:57.860 "data_size": 63488 00:14:57.860 } 00:14:57.860 ] 00:14:57.860 }' 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.860 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.427 [2024-11-27 08:46:54.913540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.427 "name": "Existed_Raid", 00:14:58.427 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:14:58.427 "strip_size_kb": 0, 00:14:58.427 "state": "configuring", 00:14:58.427 "raid_level": "raid1", 00:14:58.427 "superblock": true, 00:14:58.427 "num_base_bdevs": 4, 00:14:58.427 "num_base_bdevs_discovered": 2, 00:14:58.427 "num_base_bdevs_operational": 4, 00:14:58.427 "base_bdevs_list": [ 00:14:58.427 { 00:14:58.427 "name": "BaseBdev1", 00:14:58.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.427 "is_configured": false, 00:14:58.427 "data_offset": 0, 00:14:58.427 "data_size": 0 00:14:58.427 }, 00:14:58.427 { 00:14:58.427 "name": null, 00:14:58.427 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:14:58.427 
"is_configured": false, 00:14:58.427 "data_offset": 0, 00:14:58.427 "data_size": 63488 00:14:58.427 }, 00:14:58.427 { 00:14:58.427 "name": "BaseBdev3", 00:14:58.427 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:14:58.427 "is_configured": true, 00:14:58.427 "data_offset": 2048, 00:14:58.427 "data_size": 63488 00:14:58.427 }, 00:14:58.427 { 00:14:58.427 "name": "BaseBdev4", 00:14:58.427 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:14:58.427 "is_configured": true, 00:14:58.427 "data_offset": 2048, 00:14:58.427 "data_size": 63488 00:14:58.427 } 00:14:58.427 ] 00:14:58.427 }' 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.427 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.685 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.685 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.685 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.685 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.685 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 [2024-11-27 08:46:55.495713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.944 BaseBdev1 
00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 [ 00:14:58.944 { 00:14:58.944 "name": "BaseBdev1", 00:14:58.944 "aliases": [ 00:14:58.944 "b3486be5-7df6-4121-b3b3-6bd241feaf72" 00:14:58.944 ], 00:14:58.944 "product_name": "Malloc disk", 00:14:58.944 "block_size": 512, 00:14:58.944 "num_blocks": 65536, 00:14:58.944 "uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:14:58.944 "assigned_rate_limits": { 00:14:58.944 
"rw_ios_per_sec": 0, 00:14:58.944 "rw_mbytes_per_sec": 0, 00:14:58.944 "r_mbytes_per_sec": 0, 00:14:58.944 "w_mbytes_per_sec": 0 00:14:58.944 }, 00:14:58.944 "claimed": true, 00:14:58.944 "claim_type": "exclusive_write", 00:14:58.944 "zoned": false, 00:14:58.944 "supported_io_types": { 00:14:58.944 "read": true, 00:14:58.944 "write": true, 00:14:58.944 "unmap": true, 00:14:58.944 "flush": true, 00:14:58.944 "reset": true, 00:14:58.944 "nvme_admin": false, 00:14:58.944 "nvme_io": false, 00:14:58.944 "nvme_io_md": false, 00:14:58.944 "write_zeroes": true, 00:14:58.944 "zcopy": true, 00:14:58.944 "get_zone_info": false, 00:14:58.944 "zone_management": false, 00:14:58.944 "zone_append": false, 00:14:58.944 "compare": false, 00:14:58.944 "compare_and_write": false, 00:14:58.944 "abort": true, 00:14:58.944 "seek_hole": false, 00:14:58.944 "seek_data": false, 00:14:58.944 "copy": true, 00:14:58.944 "nvme_iov_md": false 00:14:58.944 }, 00:14:58.944 "memory_domains": [ 00:14:58.944 { 00:14:58.944 "dma_device_id": "system", 00:14:58.944 "dma_device_type": 1 00:14:58.944 }, 00:14:58.944 { 00:14:58.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.944 "dma_device_type": 2 00:14:58.944 } 00:14:58.944 ], 00:14:58.944 "driver_specific": {} 00:14:58.944 } 00:14:58.944 ] 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.944 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.944 "name": "Existed_Raid", 00:14:58.944 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:14:58.944 "strip_size_kb": 0, 00:14:58.944 "state": "configuring", 00:14:58.944 "raid_level": "raid1", 00:14:58.944 "superblock": true, 00:14:58.944 "num_base_bdevs": 4, 00:14:58.944 "num_base_bdevs_discovered": 3, 00:14:58.944 "num_base_bdevs_operational": 4, 00:14:58.944 "base_bdevs_list": [ 00:14:58.944 { 00:14:58.944 "name": "BaseBdev1", 00:14:58.944 "uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:14:58.944 "is_configured": true, 00:14:58.944 "data_offset": 2048, 00:14:58.944 "data_size": 63488 
00:14:58.944 }, 00:14:58.944 { 00:14:58.944 "name": null, 00:14:58.944 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:14:58.944 "is_configured": false, 00:14:58.944 "data_offset": 0, 00:14:58.944 "data_size": 63488 00:14:58.945 }, 00:14:58.945 { 00:14:58.945 "name": "BaseBdev3", 00:14:58.945 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:14:58.945 "is_configured": true, 00:14:58.945 "data_offset": 2048, 00:14:58.945 "data_size": 63488 00:14:58.945 }, 00:14:58.945 { 00:14:58.945 "name": "BaseBdev4", 00:14:58.945 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:14:58.945 "is_configured": true, 00:14:58.945 "data_offset": 2048, 00:14:58.945 "data_size": 63488 00:14:58.945 } 00:14:58.945 ] 00:14:58.945 }' 00:14:58.945 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.945 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.510 
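The `verify_raid_bdev_state` calls traced above all follow the same pattern: fetch the raid bdev via `bdev_raid_get_bdevs all`, isolate the entry with `jq select`, then compare individual fields against the expected values. A minimal standalone sketch of that check, with `rpc_cmd` stubbed by a hypothetical heredoc holding a trimmed copy of the JSON from this log (so it runs without a live SPDK target):

```shell
#!/usr/bin/env bash
# Hypothetical stub for `rpc_cmd bdev_raid_get_bdevs all`: emits a trimmed
# copy of the JSON captured in this log instead of querying a live target.
rpc_cmd() {
	cat <<'EOF'
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": null, "is_configured": false},
      {"name": "BaseBdev3", "is_configured": true},
      {"name": "BaseBdev4", "is_configured": true}
    ]
  }
]
EOF
}

# Same shape as the bdev_raid.sh@113 trace: select the target raid bdev,
# then assert the expected state and discovered-member count field by field.
raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(jq -r '.state' <<<"$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$raid_bdev_info")
[[ $state == configuring ]] || exit 1
[[ $discovered == 3 ]] || exit 1
echo "state=$state discovered=$discovered"
```

This mirrors why the trace shows `jq -r '.[] | select(.name == "Existed_Raid")'` next to each `rpc_cmd bdev_raid_get_bdevs all`: the RPC returns an array, and the test compares one object out of it.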
[2024-11-27 08:46:56.055963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.510 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.511 08:46:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.511 "name": "Existed_Raid", 00:14:59.511 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:14:59.511 "strip_size_kb": 0, 00:14:59.511 "state": "configuring", 00:14:59.511 "raid_level": "raid1", 00:14:59.511 "superblock": true, 00:14:59.511 "num_base_bdevs": 4, 00:14:59.511 "num_base_bdevs_discovered": 2, 00:14:59.511 "num_base_bdevs_operational": 4, 00:14:59.511 "base_bdevs_list": [ 00:14:59.511 { 00:14:59.511 "name": "BaseBdev1", 00:14:59.511 "uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:14:59.511 "is_configured": true, 00:14:59.511 "data_offset": 2048, 00:14:59.511 "data_size": 63488 00:14:59.511 }, 00:14:59.511 { 00:14:59.511 "name": null, 00:14:59.511 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:14:59.511 "is_configured": false, 00:14:59.511 "data_offset": 0, 00:14:59.511 "data_size": 63488 00:14:59.511 }, 00:14:59.511 { 00:14:59.511 "name": null, 00:14:59.511 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:14:59.511 "is_configured": false, 00:14:59.511 "data_offset": 0, 00:14:59.511 "data_size": 63488 00:14:59.511 }, 00:14:59.511 { 00:14:59.511 "name": "BaseBdev4", 00:14:59.511 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:14:59.511 "is_configured": true, 00:14:59.511 "data_offset": 2048, 00:14:59.511 "data_size": 63488 00:14:59.511 } 00:14:59.511 ] 00:14:59.511 }' 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.511 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.769 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.769 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:59.769 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.770 
08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.028 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.028 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:00.028 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:00.028 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.028 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.028 [2024-11-27 08:46:56.568085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.028 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.029 "name": "Existed_Raid", 00:15:00.029 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:15:00.029 "strip_size_kb": 0, 00:15:00.029 "state": "configuring", 00:15:00.029 "raid_level": "raid1", 00:15:00.029 "superblock": true, 00:15:00.029 "num_base_bdevs": 4, 00:15:00.029 "num_base_bdevs_discovered": 3, 00:15:00.029 "num_base_bdevs_operational": 4, 00:15:00.029 "base_bdevs_list": [ 00:15:00.029 { 00:15:00.029 "name": "BaseBdev1", 00:15:00.029 "uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:15:00.029 "is_configured": true, 00:15:00.029 "data_offset": 2048, 00:15:00.029 "data_size": 63488 00:15:00.029 }, 00:15:00.029 { 00:15:00.029 "name": null, 00:15:00.029 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:15:00.029 "is_configured": false, 00:15:00.029 "data_offset": 0, 00:15:00.029 "data_size": 63488 00:15:00.029 }, 00:15:00.029 { 00:15:00.029 "name": "BaseBdev3", 00:15:00.029 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:15:00.029 "is_configured": true, 00:15:00.029 "data_offset": 2048, 00:15:00.029 "data_size": 63488 00:15:00.029 }, 00:15:00.029 { 00:15:00.029 "name": "BaseBdev4", 00:15:00.029 "uuid": 
"e82643da-2542-45c2-baca-be183da4a1bb", 00:15:00.029 "is_configured": true, 00:15:00.029 "data_offset": 2048, 00:15:00.029 "data_size": 63488 00:15:00.029 } 00:15:00.029 ] 00:15:00.029 }' 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.029 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.596 [2024-11-27 08:46:57.116296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.596 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.596 "name": "Existed_Raid", 00:15:00.596 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:15:00.596 "strip_size_kb": 0, 00:15:00.596 "state": "configuring", 00:15:00.596 "raid_level": "raid1", 00:15:00.596 "superblock": true, 00:15:00.596 "num_base_bdevs": 4, 00:15:00.596 "num_base_bdevs_discovered": 2, 00:15:00.596 "num_base_bdevs_operational": 4, 00:15:00.596 "base_bdevs_list": [ 00:15:00.596 { 00:15:00.596 "name": null, 00:15:00.596 
"uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:15:00.596 "is_configured": false, 00:15:00.596 "data_offset": 0, 00:15:00.596 "data_size": 63488 00:15:00.596 }, 00:15:00.596 { 00:15:00.596 "name": null, 00:15:00.596 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:15:00.596 "is_configured": false, 00:15:00.596 "data_offset": 0, 00:15:00.596 "data_size": 63488 00:15:00.596 }, 00:15:00.596 { 00:15:00.596 "name": "BaseBdev3", 00:15:00.596 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:15:00.596 "is_configured": true, 00:15:00.596 "data_offset": 2048, 00:15:00.597 "data_size": 63488 00:15:00.597 }, 00:15:00.597 { 00:15:00.597 "name": "BaseBdev4", 00:15:00.597 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:15:00.597 "is_configured": true, 00:15:00.597 "data_offset": 2048, 00:15:00.597 "data_size": 63488 00:15:00.597 } 00:15:00.597 ] 00:15:00.597 }' 00:15:00.597 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.597 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.164 [2024-11-27 08:46:57.807589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.164 08:46:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.164 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.164 "name": "Existed_Raid", 00:15:01.164 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:15:01.164 "strip_size_kb": 0, 00:15:01.164 "state": "configuring", 00:15:01.164 "raid_level": "raid1", 00:15:01.164 "superblock": true, 00:15:01.164 "num_base_bdevs": 4, 00:15:01.164 "num_base_bdevs_discovered": 3, 00:15:01.164 "num_base_bdevs_operational": 4, 00:15:01.164 "base_bdevs_list": [ 00:15:01.164 { 00:15:01.164 "name": null, 00:15:01.164 "uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:15:01.164 "is_configured": false, 00:15:01.164 "data_offset": 0, 00:15:01.164 "data_size": 63488 00:15:01.164 }, 00:15:01.164 { 00:15:01.164 "name": "BaseBdev2", 00:15:01.164 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:15:01.164 "is_configured": true, 00:15:01.164 "data_offset": 2048, 00:15:01.164 "data_size": 63488 00:15:01.164 }, 00:15:01.164 { 00:15:01.164 "name": "BaseBdev3", 00:15:01.164 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:15:01.164 "is_configured": true, 00:15:01.164 "data_offset": 2048, 00:15:01.164 "data_size": 63488 00:15:01.165 }, 00:15:01.165 { 00:15:01.165 "name": "BaseBdev4", 00:15:01.165 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:15:01.165 "is_configured": true, 00:15:01.165 "data_offset": 2048, 00:15:01.165 "data_size": 63488 00:15:01.165 } 00:15:01.165 ] 00:15:01.165 }' 00:15:01.165 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.165 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:01.732 08:46:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b3486be5-7df6-4121-b3b3-6bd241feaf72 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 [2024-11-27 08:46:58.449163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:01.732 [2024-11-27 08:46:58.449570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:01.732 [2024-11-27 08:46:58.449597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:01.732 [2024-11-27 08:46:58.449946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:01.732 [2024-11-27 08:46:58.450164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:01.732 [2024-11-27 08:46:58.450181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:01.732 NewBaseBdev 00:15:01.732 [2024-11-27 08:46:58.450391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.732 08:46:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 [ 00:15:01.732 { 00:15:01.732 "name": "NewBaseBdev", 00:15:01.732 "aliases": [ 00:15:01.732 "b3486be5-7df6-4121-b3b3-6bd241feaf72" 00:15:01.732 ], 00:15:01.732 "product_name": "Malloc disk", 00:15:01.732 "block_size": 512, 00:15:01.732 "num_blocks": 65536, 00:15:01.732 "uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:15:01.732 "assigned_rate_limits": { 00:15:01.732 "rw_ios_per_sec": 0, 00:15:01.732 "rw_mbytes_per_sec": 0, 00:15:01.732 "r_mbytes_per_sec": 0, 00:15:01.732 "w_mbytes_per_sec": 0 00:15:01.732 }, 00:15:01.732 "claimed": true, 00:15:01.732 "claim_type": "exclusive_write", 00:15:01.732 "zoned": false, 00:15:01.732 "supported_io_types": { 00:15:01.732 "read": true, 00:15:01.732 "write": true, 00:15:01.732 "unmap": true, 00:15:01.732 "flush": true, 00:15:01.732 "reset": true, 00:15:01.732 "nvme_admin": false, 00:15:01.732 "nvme_io": false, 00:15:01.732 "nvme_io_md": false, 00:15:01.732 "write_zeroes": true, 00:15:01.732 "zcopy": true, 00:15:01.732 "get_zone_info": false, 00:15:01.732 "zone_management": false, 00:15:01.732 "zone_append": false, 00:15:01.732 "compare": false, 00:15:01.732 "compare_and_write": false, 00:15:01.732 "abort": true, 00:15:01.732 "seek_hole": false, 00:15:01.732 "seek_data": false, 00:15:01.732 "copy": true, 00:15:01.732 "nvme_iov_md": false 00:15:01.732 }, 00:15:01.732 "memory_domains": [ 00:15:01.732 { 00:15:01.732 "dma_device_id": "system", 00:15:01.732 "dma_device_type": 1 00:15:01.732 }, 00:15:01.732 { 00:15:01.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.732 "dma_device_type": 2 00:15:01.732 } 00:15:01.732 ], 00:15:01.732 "driver_specific": {} 00:15:01.732 } 00:15:01.732 ] 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:15:01.732 08:46:58 
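The `waitforbdev NewBaseBdev` step traced above (from `autotest_common.sh`) polls `bdev_get_bdevs -b <name> -t <timeout>` until the bdev is registered. A sketch of that polling loop under stated assumptions: `rpc_cmd` is a hypothetical stub that fails twice and then succeeds, standing in for a bdev that appears on the third poll, and the retry count and sleep interval are illustrative, not the helper's exact values.

```shell
#!/usr/bin/env bash
# Hypothetical stub: pretend the bdev becomes visible on the third poll.
attempts=0
rpc_cmd() {
	attempts=$((attempts + 1))
	(( attempts >= 3 ))
}

# Poll-until-present loop in the style of waitforbdev: retry bdev_get_bdevs
# with a per-call timeout until it succeeds or we give up.
waitforbdev() {
	local bdev_name=$1 i
	for ((i = 0; i < 10; i++)); do
		if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t 2000; then
			return 0
		fi
		sleep 0.1
	done
	return 1
}

waitforbdev NewBaseBdev && echo "NewBaseBdev ready after $attempts polls"
```

Note that `rpc_cmd` is called in the `if` condition of the current shell, not a subshell, so the `attempts` counter persists across polls.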
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.992 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.992 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.992 "name": "Existed_Raid", 00:15:01.992 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:15:01.992 "strip_size_kb": 0, 00:15:01.992 
"state": "online", 00:15:01.992 "raid_level": "raid1", 00:15:01.992 "superblock": true, 00:15:01.992 "num_base_bdevs": 4, 00:15:01.992 "num_base_bdevs_discovered": 4, 00:15:01.992 "num_base_bdevs_operational": 4, 00:15:01.992 "base_bdevs_list": [ 00:15:01.992 { 00:15:01.992 "name": "NewBaseBdev", 00:15:01.992 "uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:15:01.992 "is_configured": true, 00:15:01.992 "data_offset": 2048, 00:15:01.992 "data_size": 63488 00:15:01.992 }, 00:15:01.992 { 00:15:01.992 "name": "BaseBdev2", 00:15:01.992 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:15:01.992 "is_configured": true, 00:15:01.992 "data_offset": 2048, 00:15:01.992 "data_size": 63488 00:15:01.992 }, 00:15:01.992 { 00:15:01.992 "name": "BaseBdev3", 00:15:01.992 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:15:01.992 "is_configured": true, 00:15:01.992 "data_offset": 2048, 00:15:01.992 "data_size": 63488 00:15:01.992 }, 00:15:01.992 { 00:15:01.992 "name": "BaseBdev4", 00:15:01.992 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:15:01.992 "is_configured": true, 00:15:01.992 "data_offset": 2048, 00:15:01.992 "data_size": 63488 00:15:01.992 } 00:15:01.992 ] 00:15:01.992 }' 00:15:01.992 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.992 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.559 
08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.559 [2024-11-27 08:46:59.021833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.559 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.559 "name": "Existed_Raid", 00:15:02.559 "aliases": [ 00:15:02.559 "0d00c1b4-d58b-4e5e-b0e0-a287495db635" 00:15:02.559 ], 00:15:02.559 "product_name": "Raid Volume", 00:15:02.559 "block_size": 512, 00:15:02.559 "num_blocks": 63488, 00:15:02.559 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:15:02.559 "assigned_rate_limits": { 00:15:02.559 "rw_ios_per_sec": 0, 00:15:02.559 "rw_mbytes_per_sec": 0, 00:15:02.559 "r_mbytes_per_sec": 0, 00:15:02.559 "w_mbytes_per_sec": 0 00:15:02.559 }, 00:15:02.559 "claimed": false, 00:15:02.559 "zoned": false, 00:15:02.559 "supported_io_types": { 00:15:02.559 "read": true, 00:15:02.559 "write": true, 00:15:02.559 "unmap": false, 00:15:02.559 "flush": false, 00:15:02.559 "reset": true, 00:15:02.559 "nvme_admin": false, 00:15:02.559 "nvme_io": false, 00:15:02.559 "nvme_io_md": false, 00:15:02.559 "write_zeroes": true, 00:15:02.559 "zcopy": false, 00:15:02.560 "get_zone_info": false, 00:15:02.560 "zone_management": false, 00:15:02.560 "zone_append": false, 00:15:02.560 "compare": false, 00:15:02.560 "compare_and_write": false, 00:15:02.560 
"abort": false, 00:15:02.560 "seek_hole": false, 00:15:02.560 "seek_data": false, 00:15:02.560 "copy": false, 00:15:02.560 "nvme_iov_md": false 00:15:02.560 }, 00:15:02.560 "memory_domains": [ 00:15:02.560 { 00:15:02.560 "dma_device_id": "system", 00:15:02.560 "dma_device_type": 1 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.560 "dma_device_type": 2 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "dma_device_id": "system", 00:15:02.560 "dma_device_type": 1 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.560 "dma_device_type": 2 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "dma_device_id": "system", 00:15:02.560 "dma_device_type": 1 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.560 "dma_device_type": 2 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "dma_device_id": "system", 00:15:02.560 "dma_device_type": 1 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.560 "dma_device_type": 2 00:15:02.560 } 00:15:02.560 ], 00:15:02.560 "driver_specific": { 00:15:02.560 "raid": { 00:15:02.560 "uuid": "0d00c1b4-d58b-4e5e-b0e0-a287495db635", 00:15:02.560 "strip_size_kb": 0, 00:15:02.560 "state": "online", 00:15:02.560 "raid_level": "raid1", 00:15:02.560 "superblock": true, 00:15:02.560 "num_base_bdevs": 4, 00:15:02.560 "num_base_bdevs_discovered": 4, 00:15:02.560 "num_base_bdevs_operational": 4, 00:15:02.560 "base_bdevs_list": [ 00:15:02.560 { 00:15:02.560 "name": "NewBaseBdev", 00:15:02.560 "uuid": "b3486be5-7df6-4121-b3b3-6bd241feaf72", 00:15:02.560 "is_configured": true, 00:15:02.560 "data_offset": 2048, 00:15:02.560 "data_size": 63488 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "name": "BaseBdev2", 00:15:02.560 "uuid": "d622b477-b582-4c95-a764-67465242ad97", 00:15:02.560 "is_configured": true, 00:15:02.560 "data_offset": 2048, 00:15:02.560 "data_size": 63488 00:15:02.560 }, 00:15:02.560 { 
00:15:02.560 "name": "BaseBdev3", 00:15:02.560 "uuid": "8af03d03-1f7b-4ae1-baae-c8caf2d94977", 00:15:02.560 "is_configured": true, 00:15:02.560 "data_offset": 2048, 00:15:02.560 "data_size": 63488 00:15:02.560 }, 00:15:02.560 { 00:15:02.560 "name": "BaseBdev4", 00:15:02.560 "uuid": "e82643da-2542-45c2-baca-be183da4a1bb", 00:15:02.560 "is_configured": true, 00:15:02.560 "data_offset": 2048, 00:15:02.560 "data_size": 63488 00:15:02.560 } 00:15:02.560 ] 00:15:02.560 } 00:15:02.560 } 00:15:02.560 }' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:02.560 BaseBdev2 00:15:02.560 BaseBdev3 00:15:02.560 BaseBdev4' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.560 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.819 [2024-11-27 08:46:59.389505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.819 [2024-11-27 08:46:59.389549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.819 [2024-11-27 08:46:59.389688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.819 [2024-11-27 08:46:59.390114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.819 [2024-11-27 08:46:59.390139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74159 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 74159 ']' 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 74159 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 74159 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:15:02.819 killing process with pid 74159 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 74159' 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 74159 00:15:02.819 [2024-11-27 08:46:59.430821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.819 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 74159 00:15:03.077 [2024-11-27 08:46:59.811864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.453 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:04.453 00:15:04.453 real 0m12.657s 00:15:04.453 user 0m20.759s 00:15:04.453 sys 0m1.828s 00:15:04.453 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # 
xtrace_disable 00:15:04.453 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.453 ************************************ 00:15:04.453 END TEST raid_state_function_test_sb 00:15:04.453 ************************************ 00:15:04.453 08:47:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:04.453 08:47:00 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:15:04.453 08:47:00 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:15:04.453 08:47:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.453 ************************************ 00:15:04.453 START TEST raid_superblock_test 00:15:04.453 ************************************ 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test raid1 4 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:04.453 08:47:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:04.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74836 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74836 00:15:04.453 08:47:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:04.454 08:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 74836 ']' 00:15:04.454 08:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.454 08:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:15:04.454 08:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.454 08:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:15:04.454 08:47:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.454 [2024-11-27 08:47:01.077301] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:15:04.454 [2024-11-27 08:47:01.077486] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74836 ] 00:15:04.713 [2024-11-27 08:47:01.252707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.713 [2024-11-27 08:47:01.398773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.972 [2024-11-27 08:47:01.621002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.972 [2024-11-27 08:47:01.621351] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:05.540 
08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 malloc1 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 [2024-11-27 08:47:02.059459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:05.540 [2024-11-27 08:47:02.059546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.540 [2024-11-27 08:47:02.059595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:05.540 [2024-11-27 08:47:02.059613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.540 [2024-11-27 08:47:02.062607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.540 [2024-11-27 08:47:02.062800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:05.540 pt1 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 malloc2 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 [2024-11-27 08:47:02.115126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.540 [2024-11-27 08:47:02.115199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.540 [2024-11-27 08:47:02.115233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:05.540 [2024-11-27 08:47:02.115248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.540 [2024-11-27 08:47:02.118143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.540 [2024-11-27 08:47:02.118356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.540 
pt2 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 malloc3 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.540 [2024-11-27 08:47:02.185006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:05.540 [2024-11-27 08:47:02.185079] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.540 [2024-11-27 08:47:02.185115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:05.540 [2024-11-27 08:47:02.185131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.540 [2024-11-27 08:47:02.188091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.540 [2024-11-27 08:47:02.188140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:05.540 pt3 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:05.540 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.541 malloc4 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.541 [2024-11-27 08:47:02.244828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:05.541 [2024-11-27 08:47:02.244922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.541 [2024-11-27 08:47:02.244952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:05.541 [2024-11-27 08:47:02.244968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.541 [2024-11-27 08:47:02.247893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.541 [2024-11-27 08:47:02.247940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:05.541 pt4 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.541 [2024-11-27 08:47:02.256867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:05.541 [2024-11-27 08:47:02.259453] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.541 [2024-11-27 08:47:02.259551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:05.541 [2024-11-27 08:47:02.259625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:05.541 [2024-11-27 08:47:02.259885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:05.541 [2024-11-27 08:47:02.259909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:05.541 [2024-11-27 08:47:02.260261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:05.541 [2024-11-27 08:47:02.260527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:05.541 [2024-11-27 08:47:02.260560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:05.541 [2024-11-27 08:47:02.260797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.541 
08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.541 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.801 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.801 "name": "raid_bdev1", 00:15:05.801 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:05.801 "strip_size_kb": 0, 00:15:05.801 "state": "online", 00:15:05.801 "raid_level": "raid1", 00:15:05.801 "superblock": true, 00:15:05.801 "num_base_bdevs": 4, 00:15:05.801 "num_base_bdevs_discovered": 4, 00:15:05.801 "num_base_bdevs_operational": 4, 00:15:05.801 "base_bdevs_list": [ 00:15:05.801 { 00:15:05.801 "name": "pt1", 00:15:05.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.801 "is_configured": true, 00:15:05.801 "data_offset": 2048, 00:15:05.801 "data_size": 63488 00:15:05.801 }, 00:15:05.801 { 00:15:05.801 "name": "pt2", 00:15:05.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.801 "is_configured": true, 00:15:05.801 "data_offset": 2048, 00:15:05.801 "data_size": 63488 00:15:05.801 }, 00:15:05.801 { 00:15:05.801 "name": "pt3", 00:15:05.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.801 "is_configured": true, 00:15:05.801 "data_offset": 2048, 00:15:05.801 "data_size": 63488 
00:15:05.801 }, 00:15:05.801 { 00:15:05.801 "name": "pt4", 00:15:05.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.801 "is_configured": true, 00:15:05.801 "data_offset": 2048, 00:15:05.801 "data_size": 63488 00:15:05.801 } 00:15:05.801 ] 00:15:05.801 }' 00:15:05.801 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.801 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.061 [2024-11-27 08:47:02.769455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:06.061 "name": "raid_bdev1", 00:15:06.061 "aliases": [ 00:15:06.061 "ea89efa1-8270-45c6-b55c-add4549ac5b9" 00:15:06.061 ], 
00:15:06.061 "product_name": "Raid Volume", 00:15:06.061 "block_size": 512, 00:15:06.061 "num_blocks": 63488, 00:15:06.061 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:06.061 "assigned_rate_limits": { 00:15:06.061 "rw_ios_per_sec": 0, 00:15:06.061 "rw_mbytes_per_sec": 0, 00:15:06.061 "r_mbytes_per_sec": 0, 00:15:06.061 "w_mbytes_per_sec": 0 00:15:06.061 }, 00:15:06.061 "claimed": false, 00:15:06.061 "zoned": false, 00:15:06.061 "supported_io_types": { 00:15:06.061 "read": true, 00:15:06.061 "write": true, 00:15:06.061 "unmap": false, 00:15:06.061 "flush": false, 00:15:06.061 "reset": true, 00:15:06.061 "nvme_admin": false, 00:15:06.061 "nvme_io": false, 00:15:06.061 "nvme_io_md": false, 00:15:06.061 "write_zeroes": true, 00:15:06.061 "zcopy": false, 00:15:06.061 "get_zone_info": false, 00:15:06.061 "zone_management": false, 00:15:06.061 "zone_append": false, 00:15:06.061 "compare": false, 00:15:06.061 "compare_and_write": false, 00:15:06.061 "abort": false, 00:15:06.061 "seek_hole": false, 00:15:06.061 "seek_data": false, 00:15:06.061 "copy": false, 00:15:06.061 "nvme_iov_md": false 00:15:06.061 }, 00:15:06.061 "memory_domains": [ 00:15:06.061 { 00:15:06.061 "dma_device_id": "system", 00:15:06.061 "dma_device_type": 1 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.061 "dma_device_type": 2 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "dma_device_id": "system", 00:15:06.061 "dma_device_type": 1 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.061 "dma_device_type": 2 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "dma_device_id": "system", 00:15:06.061 "dma_device_type": 1 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.061 "dma_device_type": 2 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "dma_device_id": "system", 00:15:06.061 "dma_device_type": 1 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:06.061 "dma_device_type": 2 00:15:06.061 } 00:15:06.061 ], 00:15:06.061 "driver_specific": { 00:15:06.061 "raid": { 00:15:06.061 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:06.061 "strip_size_kb": 0, 00:15:06.061 "state": "online", 00:15:06.061 "raid_level": "raid1", 00:15:06.061 "superblock": true, 00:15:06.061 "num_base_bdevs": 4, 00:15:06.061 "num_base_bdevs_discovered": 4, 00:15:06.061 "num_base_bdevs_operational": 4, 00:15:06.061 "base_bdevs_list": [ 00:15:06.061 { 00:15:06.061 "name": "pt1", 00:15:06.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.061 "is_configured": true, 00:15:06.061 "data_offset": 2048, 00:15:06.061 "data_size": 63488 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "name": "pt2", 00:15:06.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.061 "is_configured": true, 00:15:06.061 "data_offset": 2048, 00:15:06.061 "data_size": 63488 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "name": "pt3", 00:15:06.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.061 "is_configured": true, 00:15:06.061 "data_offset": 2048, 00:15:06.061 "data_size": 63488 00:15:06.061 }, 00:15:06.061 { 00:15:06.061 "name": "pt4", 00:15:06.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.061 "is_configured": true, 00:15:06.061 "data_offset": 2048, 00:15:06.061 "data_size": 63488 00:15:06.061 } 00:15:06.061 ] 00:15:06.061 } 00:15:06.061 } 00:15:06.061 }' 00:15:06.061 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:06.320 pt2 00:15:06.320 pt3 00:15:06.320 pt4' 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.320 08:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.320 08:47:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.320 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.578 [2024-11-27 08:47:03.129490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ea89efa1-8270-45c6-b55c-add4549ac5b9 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ea89efa1-8270-45c6-b55c-add4549ac5b9 ']' 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.578 [2024-11-27 08:47:03.181108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.578 [2024-11-27 08:47:03.181273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.578 [2024-11-27 08:47:03.181440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.578 [2024-11-27 08:47:03.181569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.578 [2024-11-27 08:47:03.181596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.578 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.579 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.579 [2024-11-27 08:47:03.333134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:06.579 [2024-11-27 08:47:03.335779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:06.837 [2024-11-27 08:47:03.335995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:06.837 [2024-11-27 08:47:03.336068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:06.837 [2024-11-27 08:47:03.336155] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:06.837 [2024-11-27 08:47:03.336245] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:06.837 [2024-11-27 08:47:03.336283] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:06.837 [2024-11-27 08:47:03.336318] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:06.837 [2024-11-27 08:47:03.336361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.837 [2024-11-27 08:47:03.336382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:15:06.837 request: 00:15:06.837 { 00:15:06.837 "name": "raid_bdev1", 00:15:06.837 "raid_level": "raid1", 00:15:06.837 "base_bdevs": [ 00:15:06.837 "malloc1", 00:15:06.837 "malloc2", 00:15:06.837 "malloc3", 00:15:06.837 "malloc4" 00:15:06.837 ], 00:15:06.837 "superblock": false, 00:15:06.837 "method": "bdev_raid_create", 00:15:06.837 "req_id": 1 00:15:06.837 } 00:15:06.837 Got JSON-RPC error response 00:15:06.837 response: 00:15:06.837 { 00:15:06.837 "code": -17, 00:15:06.837 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:06.837 } 00:15:06.837 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:06.837 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.838 08:47:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.838 [2024-11-27 08:47:03.393208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.838 [2024-11-27 08:47:03.393425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.838 [2024-11-27 08:47:03.393498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:06.838 [2024-11-27 08:47:03.393682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.838 [2024-11-27 08:47:03.396823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.838 [2024-11-27 08:47:03.396998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.838 [2024-11-27 08:47:03.397214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:06.838 [2024-11-27 08:47:03.397430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.838 pt1 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.838 08:47:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.838 "name": "raid_bdev1", 00:15:06.838 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:06.838 "strip_size_kb": 0, 00:15:06.838 "state": "configuring", 00:15:06.838 "raid_level": "raid1", 00:15:06.838 "superblock": true, 00:15:06.838 "num_base_bdevs": 4, 00:15:06.838 "num_base_bdevs_discovered": 1, 00:15:06.838 "num_base_bdevs_operational": 4, 00:15:06.838 "base_bdevs_list": [ 00:15:06.838 { 00:15:06.838 "name": "pt1", 00:15:06.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.838 "is_configured": true, 00:15:06.838 "data_offset": 2048, 00:15:06.838 "data_size": 63488 00:15:06.838 }, 00:15:06.838 { 00:15:06.838 "name": null, 00:15:06.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.838 "is_configured": false, 00:15:06.838 "data_offset": 2048, 00:15:06.838 "data_size": 63488 00:15:06.838 }, 00:15:06.838 { 00:15:06.838 "name": null, 00:15:06.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.838 
"is_configured": false, 00:15:06.838 "data_offset": 2048, 00:15:06.838 "data_size": 63488 00:15:06.838 }, 00:15:06.838 { 00:15:06.838 "name": null, 00:15:06.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.838 "is_configured": false, 00:15:06.838 "data_offset": 2048, 00:15:06.838 "data_size": 63488 00:15:06.838 } 00:15:06.838 ] 00:15:06.838 }' 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.838 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.407 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:07.407 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.407 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.407 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.407 [2024-11-27 08:47:03.877507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.407 [2024-11-27 08:47:03.877612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.407 [2024-11-27 08:47:03.877647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:07.407 [2024-11-27 08:47:03.877667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.407 [2024-11-27 08:47:03.878321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.407 [2024-11-27 08:47:03.878376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.407 [2024-11-27 08:47:03.878500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:07.407 [2024-11-27 08:47:03.878572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:15:07.407 pt2 00:15:07.407 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.408 [2024-11-27 08:47:03.885460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.408 08:47:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.408 "name": "raid_bdev1", 00:15:07.408 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:07.408 "strip_size_kb": 0, 00:15:07.408 "state": "configuring", 00:15:07.408 "raid_level": "raid1", 00:15:07.408 "superblock": true, 00:15:07.408 "num_base_bdevs": 4, 00:15:07.408 "num_base_bdevs_discovered": 1, 00:15:07.408 "num_base_bdevs_operational": 4, 00:15:07.408 "base_bdevs_list": [ 00:15:07.408 { 00:15:07.408 "name": "pt1", 00:15:07.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.408 "is_configured": true, 00:15:07.408 "data_offset": 2048, 00:15:07.408 "data_size": 63488 00:15:07.408 }, 00:15:07.408 { 00:15:07.408 "name": null, 00:15:07.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.408 "is_configured": false, 00:15:07.408 "data_offset": 0, 00:15:07.408 "data_size": 63488 00:15:07.408 }, 00:15:07.408 { 00:15:07.408 "name": null, 00:15:07.408 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.408 "is_configured": false, 00:15:07.408 "data_offset": 2048, 00:15:07.408 "data_size": 63488 00:15:07.408 }, 00:15:07.408 { 00:15:07.408 "name": null, 00:15:07.408 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.408 "is_configured": false, 00:15:07.408 "data_offset": 2048, 00:15:07.408 "data_size": 63488 00:15:07.408 } 00:15:07.408 ] 00:15:07.408 }' 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.408 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.673 [2024-11-27 08:47:04.389612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.673 [2024-11-27 08:47:04.389702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.673 [2024-11-27 08:47:04.389746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:07.673 [2024-11-27 08:47:04.389767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.673 [2024-11-27 08:47:04.390454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.673 [2024-11-27 08:47:04.390482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.673 [2024-11-27 08:47:04.390615] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:07.673 [2024-11-27 08:47:04.390652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.673 pt2 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:07.673 08:47:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.673 [2024-11-27 08:47:04.397554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:07.673 [2024-11-27 08:47:04.397756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.673 [2024-11-27 08:47:04.397797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:07.673 [2024-11-27 08:47:04.397812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.673 [2024-11-27 08:47:04.398291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.673 [2024-11-27 08:47:04.398327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:07.673 [2024-11-27 08:47:04.398431] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:07.673 [2024-11-27 08:47:04.398461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:07.673 pt3 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.673 [2024-11-27 08:47:04.405533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:07.673 [2024-11-27 
08:47:04.405586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.673 [2024-11-27 08:47:04.405613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:07.673 [2024-11-27 08:47:04.405627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.673 [2024-11-27 08:47:04.406087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.673 [2024-11-27 08:47:04.406119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:07.673 [2024-11-27 08:47:04.406201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:07.673 [2024-11-27 08:47:04.406229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:07.673 [2024-11-27 08:47:04.406451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:07.673 [2024-11-27 08:47:04.406469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:07.673 [2024-11-27 08:47:04.406809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:07.673 [2024-11-27 08:47:04.407018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:07.673 [2024-11-27 08:47:04.407039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:07.673 [2024-11-27 08:47:04.407208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.673 pt4 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.673 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.944 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.944 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.944 "name": "raid_bdev1", 00:15:07.944 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:07.944 "strip_size_kb": 0, 00:15:07.944 "state": "online", 00:15:07.944 "raid_level": "raid1", 00:15:07.944 "superblock": true, 00:15:07.944 "num_base_bdevs": 4, 00:15:07.944 
"num_base_bdevs_discovered": 4, 00:15:07.944 "num_base_bdevs_operational": 4, 00:15:07.944 "base_bdevs_list": [ 00:15:07.944 { 00:15:07.944 "name": "pt1", 00:15:07.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.944 "is_configured": true, 00:15:07.944 "data_offset": 2048, 00:15:07.944 "data_size": 63488 00:15:07.944 }, 00:15:07.944 { 00:15:07.944 "name": "pt2", 00:15:07.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.944 "is_configured": true, 00:15:07.944 "data_offset": 2048, 00:15:07.944 "data_size": 63488 00:15:07.944 }, 00:15:07.944 { 00:15:07.944 "name": "pt3", 00:15:07.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.944 "is_configured": true, 00:15:07.944 "data_offset": 2048, 00:15:07.944 "data_size": 63488 00:15:07.944 }, 00:15:07.944 { 00:15:07.944 "name": "pt4", 00:15:07.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.944 "is_configured": true, 00:15:07.944 "data_offset": 2048, 00:15:07.944 "data_size": 63488 00:15:07.944 } 00:15:07.944 ] 00:15:07.944 }' 00:15:07.944 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.944 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.202 [2024-11-27 08:47:04.914198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.202 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.202 "name": "raid_bdev1", 00:15:08.202 "aliases": [ 00:15:08.202 "ea89efa1-8270-45c6-b55c-add4549ac5b9" 00:15:08.202 ], 00:15:08.202 "product_name": "Raid Volume", 00:15:08.202 "block_size": 512, 00:15:08.202 "num_blocks": 63488, 00:15:08.202 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:08.202 "assigned_rate_limits": { 00:15:08.202 "rw_ios_per_sec": 0, 00:15:08.202 "rw_mbytes_per_sec": 0, 00:15:08.202 "r_mbytes_per_sec": 0, 00:15:08.202 "w_mbytes_per_sec": 0 00:15:08.202 }, 00:15:08.202 "claimed": false, 00:15:08.202 "zoned": false, 00:15:08.202 "supported_io_types": { 00:15:08.202 "read": true, 00:15:08.202 "write": true, 00:15:08.202 "unmap": false, 00:15:08.202 "flush": false, 00:15:08.202 "reset": true, 00:15:08.202 "nvme_admin": false, 00:15:08.202 "nvme_io": false, 00:15:08.202 "nvme_io_md": false, 00:15:08.202 "write_zeroes": true, 00:15:08.202 "zcopy": false, 00:15:08.202 "get_zone_info": false, 00:15:08.202 "zone_management": false, 00:15:08.202 "zone_append": false, 00:15:08.202 "compare": false, 00:15:08.202 "compare_and_write": false, 00:15:08.202 "abort": false, 00:15:08.202 "seek_hole": false, 00:15:08.202 "seek_data": false, 00:15:08.202 "copy": false, 00:15:08.202 "nvme_iov_md": false 00:15:08.202 }, 00:15:08.202 "memory_domains": [ 00:15:08.202 { 00:15:08.202 "dma_device_id": "system", 00:15:08.202 
"dma_device_type": 1 00:15:08.202 }, 00:15:08.202 { 00:15:08.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.202 "dma_device_type": 2 00:15:08.202 }, 00:15:08.202 { 00:15:08.202 "dma_device_id": "system", 00:15:08.202 "dma_device_type": 1 00:15:08.202 }, 00:15:08.202 { 00:15:08.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.202 "dma_device_type": 2 00:15:08.202 }, 00:15:08.202 { 00:15:08.202 "dma_device_id": "system", 00:15:08.202 "dma_device_type": 1 00:15:08.202 }, 00:15:08.202 { 00:15:08.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.202 "dma_device_type": 2 00:15:08.202 }, 00:15:08.202 { 00:15:08.202 "dma_device_id": "system", 00:15:08.202 "dma_device_type": 1 00:15:08.202 }, 00:15:08.202 { 00:15:08.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.202 "dma_device_type": 2 00:15:08.203 } 00:15:08.203 ], 00:15:08.203 "driver_specific": { 00:15:08.203 "raid": { 00:15:08.203 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:08.203 "strip_size_kb": 0, 00:15:08.203 "state": "online", 00:15:08.203 "raid_level": "raid1", 00:15:08.203 "superblock": true, 00:15:08.203 "num_base_bdevs": 4, 00:15:08.203 "num_base_bdevs_discovered": 4, 00:15:08.203 "num_base_bdevs_operational": 4, 00:15:08.203 "base_bdevs_list": [ 00:15:08.203 { 00:15:08.203 "name": "pt1", 00:15:08.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.203 "is_configured": true, 00:15:08.203 "data_offset": 2048, 00:15:08.203 "data_size": 63488 00:15:08.203 }, 00:15:08.203 { 00:15:08.203 "name": "pt2", 00:15:08.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.203 "is_configured": true, 00:15:08.203 "data_offset": 2048, 00:15:08.203 "data_size": 63488 00:15:08.203 }, 00:15:08.203 { 00:15:08.203 "name": "pt3", 00:15:08.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.203 "is_configured": true, 00:15:08.203 "data_offset": 2048, 00:15:08.203 "data_size": 63488 00:15:08.203 }, 00:15:08.203 { 00:15:08.203 "name": "pt4", 00:15:08.203 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:08.203 "is_configured": true, 00:15:08.203 "data_offset": 2048, 00:15:08.203 "data_size": 63488 00:15:08.203 } 00:15:08.203 ] 00:15:08.203 } 00:15:08.203 } 00:15:08.203 }' 00:15:08.203 08:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:08.461 pt2 00:15:08.461 pt3 00:15:08.461 pt4' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.461 08:47:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.461 08:47:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:08.720 [2024-11-27 08:47:05.270206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ea89efa1-8270-45c6-b55c-add4549ac5b9 '!=' ea89efa1-8270-45c6-b55c-add4549ac5b9 ']' 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.720 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.720 [2024-11-27 08:47:05.321912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:08.721 08:47:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.721 "name": "raid_bdev1", 00:15:08.721 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:08.721 "strip_size_kb": 0, 00:15:08.721 "state": "online", 
00:15:08.721 "raid_level": "raid1", 00:15:08.721 "superblock": true, 00:15:08.721 "num_base_bdevs": 4, 00:15:08.721 "num_base_bdevs_discovered": 3, 00:15:08.721 "num_base_bdevs_operational": 3, 00:15:08.721 "base_bdevs_list": [ 00:15:08.721 { 00:15:08.721 "name": null, 00:15:08.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.721 "is_configured": false, 00:15:08.721 "data_offset": 0, 00:15:08.721 "data_size": 63488 00:15:08.721 }, 00:15:08.721 { 00:15:08.721 "name": "pt2", 00:15:08.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.721 "is_configured": true, 00:15:08.721 "data_offset": 2048, 00:15:08.721 "data_size": 63488 00:15:08.721 }, 00:15:08.721 { 00:15:08.721 "name": "pt3", 00:15:08.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.721 "is_configured": true, 00:15:08.721 "data_offset": 2048, 00:15:08.721 "data_size": 63488 00:15:08.721 }, 00:15:08.721 { 00:15:08.721 "name": "pt4", 00:15:08.721 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:08.721 "is_configured": true, 00:15:08.721 "data_offset": 2048, 00:15:08.721 "data_size": 63488 00:15:08.721 } 00:15:08.721 ] 00:15:08.721 }' 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.721 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.287 [2024-11-27 08:47:05.841965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.287 [2024-11-27 08:47:05.842011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.287 [2024-11-27 08:47:05.842131] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:09.287 [2024-11-27 08:47:05.842251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.287 [2024-11-27 08:47:05.842282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.287 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.288 
08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.288 [2024-11-27 08:47:05.929936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.288 [2024-11-27 08:47:05.930009] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.288 [2024-11-27 08:47:05.930040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:09.288 [2024-11-27 08:47:05.930056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.288 [2024-11-27 08:47:05.933218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.288 [2024-11-27 08:47:05.933411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.288 [2024-11-27 08:47:05.933561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.288 [2024-11-27 08:47:05.933630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.288 pt2 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.288 "name": "raid_bdev1", 00:15:09.288 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:09.288 "strip_size_kb": 0, 00:15:09.288 "state": "configuring", 00:15:09.288 "raid_level": "raid1", 00:15:09.288 "superblock": true, 00:15:09.288 "num_base_bdevs": 4, 00:15:09.288 "num_base_bdevs_discovered": 1, 00:15:09.288 "num_base_bdevs_operational": 3, 00:15:09.288 "base_bdevs_list": [ 00:15:09.288 { 00:15:09.288 "name": null, 00:15:09.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.288 "is_configured": false, 00:15:09.288 "data_offset": 2048, 00:15:09.288 "data_size": 63488 00:15:09.288 }, 00:15:09.288 { 00:15:09.288 "name": "pt2", 00:15:09.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.288 "is_configured": true, 00:15:09.288 "data_offset": 2048, 00:15:09.288 "data_size": 63488 00:15:09.288 }, 00:15:09.288 { 00:15:09.288 "name": null, 00:15:09.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.288 "is_configured": false, 00:15:09.288 "data_offset": 2048, 00:15:09.288 "data_size": 63488 00:15:09.288 }, 00:15:09.288 { 00:15:09.288 "name": null, 00:15:09.288 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.288 "is_configured": false, 00:15:09.288 "data_offset": 2048, 00:15:09.288 "data_size": 63488 00:15:09.288 } 00:15:09.288 ] 00:15:09.288 }' 
00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.288 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.855 [2024-11-27 08:47:06.414137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:09.855 [2024-11-27 08:47:06.414377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.855 [2024-11-27 08:47:06.414429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:09.855 [2024-11-27 08:47:06.414447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.855 [2024-11-27 08:47:06.415100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.855 [2024-11-27 08:47:06.415137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:09.855 [2024-11-27 08:47:06.415267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:09.855 [2024-11-27 08:47:06.415303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.855 pt3 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.855 "name": "raid_bdev1", 00:15:09.855 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:09.855 "strip_size_kb": 0, 00:15:09.855 "state": "configuring", 00:15:09.855 "raid_level": "raid1", 00:15:09.855 "superblock": true, 00:15:09.855 "num_base_bdevs": 4, 00:15:09.855 "num_base_bdevs_discovered": 2, 00:15:09.855 "num_base_bdevs_operational": 3, 00:15:09.855 
"base_bdevs_list": [ 00:15:09.855 { 00:15:09.855 "name": null, 00:15:09.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.855 "is_configured": false, 00:15:09.855 "data_offset": 2048, 00:15:09.855 "data_size": 63488 00:15:09.855 }, 00:15:09.855 { 00:15:09.855 "name": "pt2", 00:15:09.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.855 "is_configured": true, 00:15:09.855 "data_offset": 2048, 00:15:09.855 "data_size": 63488 00:15:09.855 }, 00:15:09.855 { 00:15:09.855 "name": "pt3", 00:15:09.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.855 "is_configured": true, 00:15:09.855 "data_offset": 2048, 00:15:09.855 "data_size": 63488 00:15:09.855 }, 00:15:09.855 { 00:15:09.855 "name": null, 00:15:09.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.855 "is_configured": false, 00:15:09.855 "data_offset": 2048, 00:15:09.855 "data_size": 63488 00:15:09.855 } 00:15:09.855 ] 00:15:09.855 }' 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.855 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.422 [2024-11-27 08:47:06.930300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:10.422 [2024-11-27 08:47:06.930402] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.422 [2024-11-27 08:47:06.930442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:10.422 [2024-11-27 08:47:06.930459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.422 [2024-11-27 08:47:06.931116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.422 [2024-11-27 08:47:06.931144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:10.422 [2024-11-27 08:47:06.931269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:10.422 [2024-11-27 08:47:06.931314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:10.422 [2024-11-27 08:47:06.931524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:10.422 [2024-11-27 08:47:06.931549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:10.422 [2024-11-27 08:47:06.931874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:10.422 [2024-11-27 08:47:06.932074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:10.422 [2024-11-27 08:47:06.932095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:10.422 [2024-11-27 08:47:06.932273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.422 pt4 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.422 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.423 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.423 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.423 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.423 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.423 "name": "raid_bdev1", 00:15:10.423 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:10.423 "strip_size_kb": 0, 00:15:10.423 "state": "online", 00:15:10.423 "raid_level": "raid1", 00:15:10.423 "superblock": true, 00:15:10.423 "num_base_bdevs": 4, 00:15:10.423 "num_base_bdevs_discovered": 3, 00:15:10.423 "num_base_bdevs_operational": 3, 00:15:10.423 "base_bdevs_list": [ 00:15:10.423 { 00:15:10.423 "name": null, 00:15:10.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.423 "is_configured": false, 00:15:10.423 
"data_offset": 2048, 00:15:10.423 "data_size": 63488 00:15:10.423 }, 00:15:10.423 { 00:15:10.423 "name": "pt2", 00:15:10.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.423 "is_configured": true, 00:15:10.423 "data_offset": 2048, 00:15:10.423 "data_size": 63488 00:15:10.423 }, 00:15:10.423 { 00:15:10.423 "name": "pt3", 00:15:10.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.423 "is_configured": true, 00:15:10.423 "data_offset": 2048, 00:15:10.423 "data_size": 63488 00:15:10.423 }, 00:15:10.423 { 00:15:10.423 "name": "pt4", 00:15:10.423 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.423 "is_configured": true, 00:15:10.423 "data_offset": 2048, 00:15:10.423 "data_size": 63488 00:15:10.423 } 00:15:10.423 ] 00:15:10.423 }' 00:15:10.423 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.423 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.681 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:10.681 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.681 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.940 [2024-11-27 08:47:07.438361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.940 [2024-11-27 08:47:07.438399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.940 [2024-11-27 08:47:07.438527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.940 [2024-11-27 08:47:07.438639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.940 [2024-11-27 08:47:07.438662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:10.940 08:47:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.940 [2024-11-27 08:47:07.510352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:10.940 [2024-11-27 08:47:07.510585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:10.940 [2024-11-27 08:47:07.510625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:10.940 [2024-11-27 08:47:07.510647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.940 [2024-11-27 08:47:07.513824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.940 [2024-11-27 08:47:07.514008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:10.940 [2024-11-27 08:47:07.514145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:10.940 [2024-11-27 08:47:07.514219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:10.940 [2024-11-27 08:47:07.514438] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:10.940 [2024-11-27 08:47:07.514463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.940 [2024-11-27 08:47:07.514486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:10.940 [2024-11-27 08:47:07.514573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.940 [2024-11-27 08:47:07.514782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.940 pt1 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.940 "name": "raid_bdev1", 00:15:10.940 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:10.940 "strip_size_kb": 0, 00:15:10.940 "state": "configuring", 00:15:10.940 "raid_level": "raid1", 00:15:10.940 "superblock": true, 00:15:10.940 "num_base_bdevs": 4, 00:15:10.940 "num_base_bdevs_discovered": 2, 00:15:10.940 "num_base_bdevs_operational": 3, 00:15:10.940 "base_bdevs_list": [ 00:15:10.940 { 00:15:10.940 "name": null, 00:15:10.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.940 "is_configured": false, 00:15:10.940 "data_offset": 2048, 00:15:10.940 
"data_size": 63488 00:15:10.940 }, 00:15:10.940 { 00:15:10.940 "name": "pt2", 00:15:10.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.940 "is_configured": true, 00:15:10.940 "data_offset": 2048, 00:15:10.940 "data_size": 63488 00:15:10.940 }, 00:15:10.940 { 00:15:10.940 "name": "pt3", 00:15:10.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.940 "is_configured": true, 00:15:10.940 "data_offset": 2048, 00:15:10.940 "data_size": 63488 00:15:10.940 }, 00:15:10.940 { 00:15:10.940 "name": null, 00:15:10.940 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.940 "is_configured": false, 00:15:10.940 "data_offset": 2048, 00:15:10.940 "data_size": 63488 00:15:10.940 } 00:15:10.940 ] 00:15:10.940 }' 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.940 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.506 [2024-11-27 
08:47:08.078739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:11.506 [2024-11-27 08:47:08.078829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.506 [2024-11-27 08:47:08.078868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:11.506 [2024-11-27 08:47:08.078886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.506 [2024-11-27 08:47:08.079542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.506 [2024-11-27 08:47:08.079570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:11.506 [2024-11-27 08:47:08.079698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:11.506 [2024-11-27 08:47:08.079750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:11.506 [2024-11-27 08:47:08.079941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:11.506 [2024-11-27 08:47:08.079958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:11.506 [2024-11-27 08:47:08.080293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:11.506 [2024-11-27 08:47:08.080503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:11.506 [2024-11-27 08:47:08.080637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:11.506 [2024-11-27 08:47:08.080846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.506 pt4 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.506 08:47:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.506 "name": "raid_bdev1", 00:15:11.506 "uuid": "ea89efa1-8270-45c6-b55c-add4549ac5b9", 00:15:11.506 "strip_size_kb": 0, 00:15:11.506 "state": "online", 00:15:11.506 "raid_level": "raid1", 00:15:11.506 "superblock": true, 00:15:11.506 "num_base_bdevs": 4, 00:15:11.506 "num_base_bdevs_discovered": 3, 00:15:11.506 "num_base_bdevs_operational": 3, 00:15:11.506 "base_bdevs_list": [ 00:15:11.506 { 
00:15:11.506 "name": null, 00:15:11.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.506 "is_configured": false, 00:15:11.506 "data_offset": 2048, 00:15:11.506 "data_size": 63488 00:15:11.506 }, 00:15:11.506 { 00:15:11.506 "name": "pt2", 00:15:11.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.506 "is_configured": true, 00:15:11.506 "data_offset": 2048, 00:15:11.506 "data_size": 63488 00:15:11.506 }, 00:15:11.506 { 00:15:11.506 "name": "pt3", 00:15:11.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.506 "is_configured": true, 00:15:11.506 "data_offset": 2048, 00:15:11.506 "data_size": 63488 00:15:11.506 }, 00:15:11.506 { 00:15:11.506 "name": "pt4", 00:15:11.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.506 "is_configured": true, 00:15:11.506 "data_offset": 2048, 00:15:11.506 "data_size": 63488 00:15:11.506 } 00:15:11.506 ] 00:15:11.506 }' 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.506 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.071 
08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.071 [2024-11-27 08:47:08.683262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ea89efa1-8270-45c6-b55c-add4549ac5b9 '!=' ea89efa1-8270-45c6-b55c-add4549ac5b9 ']' 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74836 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 74836 ']' 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # kill -0 74836 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # uname 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 74836 00:15:12.071 killing process with pid 74836 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 74836' 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # kill 74836 00:15:12.071 [2024-11-27 08:47:08.760889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.071 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@975 -- # wait 74836 00:15:12.071 [2024-11-27 08:47:08.761024] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.071 [2024-11-27 08:47:08.761138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.071 [2024-11-27 08:47:08.761159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:12.636 [2024-11-27 08:47:09.131038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.569 08:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:13.569 00:15:13.569 real 0m9.259s 00:15:13.569 user 0m15.061s 00:15:13.569 sys 0m1.395s 00:15:13.569 ************************************ 00:15:13.569 END TEST raid_superblock_test 00:15:13.569 ************************************ 00:15:13.569 08:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:15:13.569 08:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.569 08:47:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:13.569 08:47:10 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:15:13.569 08:47:10 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:15:13.569 08:47:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.570 ************************************ 00:15:13.570 START TEST raid_read_error_test 00:15:13.570 ************************************ 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid1 4 read 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:13.570 08:47:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xPMIP7WG40 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75333 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75333 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # '[' -z 75333 ']' 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:15:13.570 08:47:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.827 [2024-11-27 08:47:10.422824] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:15:13.827 [2024-11-27 08:47:10.423011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75333 ] 00:15:14.084 [2024-11-27 08:47:10.613249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.084 [2024-11-27 08:47:10.773069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.343 [2024-11-27 08:47:11.026146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.343 [2024-11-27 08:47:11.026511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@865 -- # return 0 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.909 BaseBdev1_malloc 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.909 true 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.909 [2024-11-27 08:47:11.426853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:14.909 [2024-11-27 08:47:11.427115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.909 [2024-11-27 08:47:11.427172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:14.909 [2024-11-27 08:47:11.427203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.909 [2024-11-27 08:47:11.430169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.909 BaseBdev1 00:15:14.909 [2024-11-27 08:47:11.430380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.909 BaseBdev2_malloc 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.909 true 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.909 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.909 [2024-11-27 08:47:11.486528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:14.909 [2024-11-27 08:47:11.486601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.909 [2024-11-27 08:47:11.486627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:14.910 [2024-11-27 08:47:11.486644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.910 [2024-11-27 08:47:11.489582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.910 [2024-11-27 08:47:11.489631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:14.910 BaseBdev2 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.910 BaseBdev3_malloc 00:15:14.910 08:47:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.910 true 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.910 [2024-11-27 08:47:11.566163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:14.910 [2024-11-27 08:47:11.566235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.910 [2024-11-27 08:47:11.566263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:14.910 [2024-11-27 08:47:11.566293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.910 [2024-11-27 08:47:11.569284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.910 [2024-11-27 08:47:11.569331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:14.910 BaseBdev3 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.910 BaseBdev4_malloc 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.910 true 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.910 [2024-11-27 08:47:11.626780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:14.910 [2024-11-27 08:47:11.626985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.910 [2024-11-27 08:47:11.627025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:14.910 [2024-11-27 08:47:11.627045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.910 [2024-11-27 08:47:11.630103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.910 [2024-11-27 08:47:11.630267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:14.910 BaseBdev4 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.910 [2024-11-27 08:47:11.638985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.910 [2024-11-27 08:47:11.641631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.910 [2024-11-27 08:47:11.641760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.910 [2024-11-27 08:47:11.641869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:14.910 [2024-11-27 08:47:11.642187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:14.910 [2024-11-27 08:47:11.642210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:14.910 [2024-11-27 08:47:11.642590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:14.910 [2024-11-27 08:47:11.642821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:14.910 [2024-11-27 08:47:11.642837] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:14.910 [2024-11-27 08:47:11.643083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:14.910 08:47:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.910 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.168 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.168 "name": "raid_bdev1", 00:15:15.168 "uuid": "07ac6dc6-4610-45a3-a510-091246c6e879", 00:15:15.168 "strip_size_kb": 0, 00:15:15.168 "state": "online", 00:15:15.168 "raid_level": "raid1", 00:15:15.168 "superblock": true, 00:15:15.168 "num_base_bdevs": 4, 00:15:15.168 "num_base_bdevs_discovered": 4, 00:15:15.168 "num_base_bdevs_operational": 4, 00:15:15.168 "base_bdevs_list": [ 00:15:15.168 { 
00:15:15.168 "name": "BaseBdev1", 00:15:15.168 "uuid": "083f0747-5f2d-56f8-ace9-fb92fdc8e392", 00:15:15.168 "is_configured": true, 00:15:15.168 "data_offset": 2048, 00:15:15.168 "data_size": 63488 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "name": "BaseBdev2", 00:15:15.168 "uuid": "feb33eb6-311e-5eef-9fab-21bfb867f292", 00:15:15.168 "is_configured": true, 00:15:15.168 "data_offset": 2048, 00:15:15.168 "data_size": 63488 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "name": "BaseBdev3", 00:15:15.168 "uuid": "70354657-5e4a-5bd2-966f-1f6d97a4169f", 00:15:15.168 "is_configured": true, 00:15:15.168 "data_offset": 2048, 00:15:15.168 "data_size": 63488 00:15:15.168 }, 00:15:15.168 { 00:15:15.168 "name": "BaseBdev4", 00:15:15.168 "uuid": "de141e0e-6329-533d-9d1d-7269821dcc50", 00:15:15.168 "is_configured": true, 00:15:15.168 "data_offset": 2048, 00:15:15.168 "data_size": 63488 00:15:15.168 } 00:15:15.168 ] 00:15:15.168 }' 00:15:15.168 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.168 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.427 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:15.427 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:15.685 [2024-11-27 08:47:12.300801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.635 08:47:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.635 08:47:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.635 "name": "raid_bdev1", 00:15:16.635 "uuid": "07ac6dc6-4610-45a3-a510-091246c6e879", 00:15:16.635 "strip_size_kb": 0, 00:15:16.635 "state": "online", 00:15:16.635 "raid_level": "raid1", 00:15:16.635 "superblock": true, 00:15:16.635 "num_base_bdevs": 4, 00:15:16.635 "num_base_bdevs_discovered": 4, 00:15:16.635 "num_base_bdevs_operational": 4, 00:15:16.635 "base_bdevs_list": [ 00:15:16.635 { 00:15:16.635 "name": "BaseBdev1", 00:15:16.635 "uuid": "083f0747-5f2d-56f8-ace9-fb92fdc8e392", 00:15:16.635 "is_configured": true, 00:15:16.635 "data_offset": 2048, 00:15:16.635 "data_size": 63488 00:15:16.635 }, 00:15:16.635 { 00:15:16.635 "name": "BaseBdev2", 00:15:16.635 "uuid": "feb33eb6-311e-5eef-9fab-21bfb867f292", 00:15:16.635 "is_configured": true, 00:15:16.635 "data_offset": 2048, 00:15:16.635 "data_size": 63488 00:15:16.635 }, 00:15:16.635 { 00:15:16.635 "name": "BaseBdev3", 00:15:16.635 "uuid": "70354657-5e4a-5bd2-966f-1f6d97a4169f", 00:15:16.635 "is_configured": true, 00:15:16.635 "data_offset": 2048, 00:15:16.635 "data_size": 63488 00:15:16.635 }, 00:15:16.635 { 00:15:16.635 "name": "BaseBdev4", 00:15:16.635 "uuid": "de141e0e-6329-533d-9d1d-7269821dcc50", 00:15:16.635 "is_configured": true, 00:15:16.635 "data_offset": 2048, 00:15:16.635 "data_size": 63488 00:15:16.635 } 00:15:16.635 ] 00:15:16.635 }' 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.635 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.214 [2024-11-27 08:47:13.693755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.214 [2024-11-27 08:47:13.693798] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.214 [2024-11-27 08:47:13.697146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.214 [2024-11-27 08:47:13.697229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.214 [2024-11-27 08:47:13.697423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.214 [2024-11-27 08:47:13.697447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:17.214 { 00:15:17.214 "results": [ 00:15:17.214 { 00:15:17.214 "job": "raid_bdev1", 00:15:17.214 "core_mask": "0x1", 00:15:17.214 "workload": "randrw", 00:15:17.214 "percentage": 50, 00:15:17.214 "status": "finished", 00:15:17.214 "queue_depth": 1, 00:15:17.214 "io_size": 131072, 00:15:17.214 "runtime": 1.390016, 00:15:17.214 "iops": 6630.139509185506, 00:15:17.214 "mibps": 828.7674386481882, 00:15:17.214 "io_failed": 0, 00:15:17.214 "io_timeout": 0, 00:15:17.214 "avg_latency_us": 146.4256186868687, 00:15:17.214 "min_latency_us": 42.82181818181818, 00:15:17.214 "max_latency_us": 2323.549090909091 00:15:17.214 } 00:15:17.214 ], 00:15:17.214 "core_count": 1 00:15:17.214 } 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75333 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' -z 75333 ']' 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # kill -0 75333 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # uname 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 75333 00:15:17.214 killing process with pid 75333 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 75333' 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # kill 75333 00:15:17.214 [2024-11-27 08:47:13.734420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.214 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@975 -- # wait 75333 00:15:17.472 [2024-11-27 08:47:14.045130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xPMIP7WG40 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:18.847 00:15:18.847 real 0m4.937s 00:15:18.847 user 0m5.987s 00:15:18.847 sys 0m0.647s 
00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:15:18.847 08:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.847 ************************************ 00:15:18.847 END TEST raid_read_error_test 00:15:18.847 ************************************ 00:15:18.847 08:47:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:18.847 08:47:15 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:15:18.847 08:47:15 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:15:18.847 08:47:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.847 ************************************ 00:15:18.847 START TEST raid_write_error_test 00:15:18.847 ************************************ 00:15:18.847 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # raid_io_error_test raid1 4 write 00:15:18.847 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:18.847 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:18.847 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:18.847 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:18.847 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.847 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KSlHXVOe33 00:15:18.848 08:47:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75480 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75480 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # '[' -z 75480 ']' 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:15:18.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:15:18.848 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.848 [2024-11-27 08:47:15.414100] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:15:18.848 [2024-11-27 08:47:15.414308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75480 ] 00:15:18.848 [2024-11-27 08:47:15.594282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.106 [2024-11-27 08:47:15.743766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.364 [2024-11-27 08:47:15.970649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.364 [2024-11-27 08:47:15.970691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@865 -- # return 0 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 BaseBdev1_malloc 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 true 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 [2024-11-27 08:47:16.525115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:19.931 [2024-11-27 08:47:16.525195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.931 [2024-11-27 08:47:16.525225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:19.931 [2024-11-27 08:47:16.525244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.931 [2024-11-27 08:47:16.528324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.931 [2024-11-27 08:47:16.528387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.931 BaseBdev1 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 BaseBdev2_malloc 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:19.931 08:47:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 true 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 [2024-11-27 08:47:16.591991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:19.931 [2024-11-27 08:47:16.592065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.931 [2024-11-27 08:47:16.592093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:19.931 [2024-11-27 08:47:16.592112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.931 [2024-11-27 08:47:16.595154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.931 [2024-11-27 08:47:16.595215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:19.931 BaseBdev2 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:19.931 BaseBdev3_malloc 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 true 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 [2024-11-27 08:47:16.674174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:19.931 [2024-11-27 08:47:16.674252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.931 [2024-11-27 08:47:16.674292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:19.931 [2024-11-27 08:47:16.674314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.931 [2024-11-27 08:47:16.677394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.931 [2024-11-27 08:47:16.677443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:19.931 BaseBdev3 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.931 08:47:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:19.932 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.932 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.190 BaseBdev4_malloc 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.190 true 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.190 [2024-11-27 08:47:16.737896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:20.190 [2024-11-27 08:47:16.737973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.190 [2024-11-27 08:47:16.738000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:20.190 [2024-11-27 08:47:16.738019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.190 [2024-11-27 08:47:16.740926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.190 [2024-11-27 08:47:16.740979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:20.190 BaseBdev4 
00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.190 [2024-11-27 08:47:16.745977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.190 [2024-11-27 08:47:16.748706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.190 [2024-11-27 08:47:16.748952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.190 [2024-11-27 08:47:16.749173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:20.190 [2024-11-27 08:47:16.749613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:20.190 [2024-11-27 08:47:16.749743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:20.190 [2024-11-27 08:47:16.750111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:20.190 [2024-11-27 08:47:16.750499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:20.190 [2024-11-27 08:47:16.750622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:20.190 [2024-11-27 08:47:16.751065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.190 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.190 "name": "raid_bdev1", 00:15:20.190 "uuid": "db74f2be-73fc-4263-b1a0-546293825f8a", 00:15:20.190 "strip_size_kb": 0, 00:15:20.190 "state": "online", 00:15:20.190 "raid_level": "raid1", 00:15:20.190 "superblock": true, 00:15:20.190 "num_base_bdevs": 4, 00:15:20.190 "num_base_bdevs_discovered": 4, 00:15:20.190 
"num_base_bdevs_operational": 4, 00:15:20.190 "base_bdevs_list": [ 00:15:20.190 { 00:15:20.190 "name": "BaseBdev1", 00:15:20.190 "uuid": "b350c7f9-a98b-53e3-bbce-8fe339392b45", 00:15:20.190 "is_configured": true, 00:15:20.190 "data_offset": 2048, 00:15:20.190 "data_size": 63488 00:15:20.190 }, 00:15:20.190 { 00:15:20.190 "name": "BaseBdev2", 00:15:20.190 "uuid": "dafbf26a-b66b-5cb3-8ccc-778e2eac2e76", 00:15:20.190 "is_configured": true, 00:15:20.190 "data_offset": 2048, 00:15:20.191 "data_size": 63488 00:15:20.191 }, 00:15:20.191 { 00:15:20.191 "name": "BaseBdev3", 00:15:20.191 "uuid": "2bd48012-68bf-55e0-8e1b-cdeb45624b66", 00:15:20.191 "is_configured": true, 00:15:20.191 "data_offset": 2048, 00:15:20.191 "data_size": 63488 00:15:20.191 }, 00:15:20.191 { 00:15:20.191 "name": "BaseBdev4", 00:15:20.191 "uuid": "693716f7-543f-5dd4-9524-5d3e3cc2a79a", 00:15:20.191 "is_configured": true, 00:15:20.191 "data_offset": 2048, 00:15:20.191 "data_size": 63488 00:15:20.191 } 00:15:20.191 ] 00:15:20.191 }' 00:15:20.191 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.191 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.840 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:20.840 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:20.840 [2024-11-27 08:47:17.400742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.775 [2024-11-27 08:47:18.294320] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:21.775 [2024-11-27 08:47:18.294406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.775 [2024-11-27 08:47:18.294704] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.775 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.775 "name": "raid_bdev1", 00:15:21.775 "uuid": "db74f2be-73fc-4263-b1a0-546293825f8a", 00:15:21.775 "strip_size_kb": 0, 00:15:21.775 "state": "online", 00:15:21.776 "raid_level": "raid1", 00:15:21.776 "superblock": true, 00:15:21.776 "num_base_bdevs": 4, 00:15:21.776 "num_base_bdevs_discovered": 3, 00:15:21.776 "num_base_bdevs_operational": 3, 00:15:21.776 "base_bdevs_list": [ 00:15:21.776 { 00:15:21.776 "name": null, 00:15:21.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.776 "is_configured": false, 00:15:21.776 "data_offset": 0, 00:15:21.776 "data_size": 63488 00:15:21.776 }, 00:15:21.776 { 00:15:21.776 "name": "BaseBdev2", 00:15:21.776 "uuid": "dafbf26a-b66b-5cb3-8ccc-778e2eac2e76", 00:15:21.776 "is_configured": true, 00:15:21.776 "data_offset": 2048, 00:15:21.776 "data_size": 63488 00:15:21.776 }, 00:15:21.776 { 00:15:21.776 "name": "BaseBdev3", 00:15:21.776 "uuid": "2bd48012-68bf-55e0-8e1b-cdeb45624b66", 00:15:21.776 "is_configured": true, 00:15:21.776 "data_offset": 2048, 00:15:21.776 "data_size": 63488 00:15:21.776 }, 00:15:21.776 { 00:15:21.776 "name": "BaseBdev4", 00:15:21.776 "uuid": "693716f7-543f-5dd4-9524-5d3e3cc2a79a", 00:15:21.776 "is_configured": true, 00:15:21.776 "data_offset": 2048, 00:15:21.776 "data_size": 63488 00:15:21.776 } 00:15:21.776 ] 
00:15:21.776 }' 00:15:21.776 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.776 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.344 [2024-11-27 08:47:18.801930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.344 [2024-11-27 08:47:18.801970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.344 [2024-11-27 08:47:18.805300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.344 [2024-11-27 08:47:18.805390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.344 [2024-11-27 08:47:18.805540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.344 [2024-11-27 08:47:18.805560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:22.344 { 00:15:22.344 "results": [ 00:15:22.344 { 00:15:22.344 "job": "raid_bdev1", 00:15:22.344 "core_mask": "0x1", 00:15:22.344 "workload": "randrw", 00:15:22.344 "percentage": 50, 00:15:22.344 "status": "finished", 00:15:22.344 "queue_depth": 1, 00:15:22.344 "io_size": 131072, 00:15:22.344 "runtime": 1.398366, 00:15:22.344 "iops": 7332.844191005788, 00:15:22.344 "mibps": 916.6055238757235, 00:15:22.344 "io_failed": 0, 00:15:22.344 "io_timeout": 0, 00:15:22.344 "avg_latency_us": 132.2036845931521, 00:15:22.344 "min_latency_us": 41.192727272727275, 00:15:22.344 "max_latency_us": 1980.9745454545455 00:15:22.344 } 00:15:22.344 ], 00:15:22.344 "core_count": 1 
00:15:22.344 } 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75480 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' -z 75480 ']' 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # kill -0 75480 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # uname 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 75480 00:15:22.344 killing process with pid 75480 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 75480' 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # kill 75480 00:15:22.344 [2024-11-27 08:47:18.842718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.344 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@975 -- # wait 75480 00:15:22.601 [2024-11-27 08:47:19.155674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KSlHXVOe33 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:23.974 ************************************ 00:15:23.974 END TEST raid_write_error_test 00:15:23.974 ************************************ 00:15:23.974 00:15:23.974 real 0m5.034s 00:15:23.974 user 0m6.167s 00:15:23.974 sys 0m0.649s 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:15:23.974 08:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.974 08:47:20 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:15:23.974 08:47:20 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:23.974 08:47:20 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:15:23.974 08:47:20 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:15:23.974 08:47:20 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:15:23.974 08:47:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.974 ************************************ 00:15:23.974 START TEST raid_rebuild_test 00:15:23.974 ************************************ 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 2 false false true 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:23.974 
08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75624 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75624 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # '[' -z 75624 ']' 00:15:23.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:15:23.974 08:47:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.974 [2024-11-27 08:47:20.500127] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:15:23.974 [2024-11-27 08:47:20.500321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75624 ] 00:15:23.974 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:23.974 Zero copy mechanism will not be used. 
00:15:23.974 [2024-11-27 08:47:20.684775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.233 [2024-11-27 08:47:20.831626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.492 [2024-11-27 08:47:21.054769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.493 [2024-11-27 08:47:21.055113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # return 0 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.060 BaseBdev1_malloc 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.060 [2024-11-27 08:47:21.575680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.060 [2024-11-27 08:47:21.575787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.060 [2024-11-27 08:47:21.575821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:25.060 [2024-11-27 08:47:21.575839] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.060 [2024-11-27 08:47:21.578786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.060 [2024-11-27 08:47:21.578977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.060 BaseBdev1 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.060 BaseBdev2_malloc 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.060 [2024-11-27 08:47:21.632115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:25.060 [2024-11-27 08:47:21.632202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.060 [2024-11-27 08:47:21.632233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:25.060 [2024-11-27 08:47:21.632255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.060 [2024-11-27 08:47:21.635263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.060 [2024-11-27 08:47:21.635316] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:25.060 BaseBdev2 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.060 spare_malloc 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.060 spare_delay 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.060 [2024-11-27 08:47:21.712292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:25.060 [2024-11-27 08:47:21.712524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.060 [2024-11-27 08:47:21.712568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:25.060 [2024-11-27 08:47:21.712589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.060 [2024-11-27 
08:47:21.715589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.060 [2024-11-27 08:47:21.715763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:25.060 spare 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.060 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.060 [2024-11-27 08:47:21.720451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.060 [2024-11-27 08:47:21.723048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.060 [2024-11-27 08:47:21.723177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:25.060 [2024-11-27 08:47:21.723201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:25.061 [2024-11-27 08:47:21.723568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:25.061 [2024-11-27 08:47:21.723795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:25.061 [2024-11-27 08:47:21.723823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:25.061 [2024-11-27 08:47:21.724016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:25.061 08:47:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.061 "name": "raid_bdev1", 00:15:25.061 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:25.061 "strip_size_kb": 0, 00:15:25.061 "state": "online", 00:15:25.061 "raid_level": "raid1", 00:15:25.061 "superblock": false, 00:15:25.061 "num_base_bdevs": 2, 00:15:25.061 "num_base_bdevs_discovered": 2, 00:15:25.061 "num_base_bdevs_operational": 2, 00:15:25.061 "base_bdevs_list": [ 00:15:25.061 { 00:15:25.061 "name": "BaseBdev1", 
00:15:25.061 "uuid": "a16747fd-2e27-5b26-9a83-c3369c898bb8", 00:15:25.061 "is_configured": true, 00:15:25.061 "data_offset": 0, 00:15:25.061 "data_size": 65536 00:15:25.061 }, 00:15:25.061 { 00:15:25.061 "name": "BaseBdev2", 00:15:25.061 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:25.061 "is_configured": true, 00:15:25.061 "data_offset": 0, 00:15:25.061 "data_size": 65536 00:15:25.061 } 00:15:25.061 ] 00:15:25.061 }' 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.061 08:47:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.629 [2024-11-27 08:47:22.245007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:25.629 
08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.629 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:25.888 [2024-11-27 08:47:22.580832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:25.888 /dev/nbd0 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # break 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.888 1+0 records in 00:15:25.888 1+0 records out 00:15:25.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605388 s, 6.8 MB/s 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:25.888 08:47:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:15:32.479 65536+0 records in 00:15:32.479 65536+0 records out 00:15:32.479 33554432 bytes (34 MB, 32 MiB) copied, 6.05582 s, 5.5 MB/s 00:15:32.479 08:47:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:32.479 08:47:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.479 08:47:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:32.479 08:47:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.479 08:47:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:32.479 08:47:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.479 08:47:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.479 [2024-11-27 08:47:29.067987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.479 [2024-11-27 08:47:29.080084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.479 08:47:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.479 "name": "raid_bdev1", 00:15:32.479 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:32.479 "strip_size_kb": 0, 00:15:32.479 "state": "online", 00:15:32.479 "raid_level": "raid1", 00:15:32.479 "superblock": false, 00:15:32.479 "num_base_bdevs": 2, 00:15:32.479 "num_base_bdevs_discovered": 1, 00:15:32.479 "num_base_bdevs_operational": 1, 00:15:32.479 "base_bdevs_list": [ 00:15:32.479 { 00:15:32.479 "name": null, 00:15:32.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.479 "is_configured": false, 00:15:32.479 "data_offset": 0, 00:15:32.479 "data_size": 65536 00:15:32.479 }, 00:15:32.479 { 00:15:32.479 "name": "BaseBdev2", 00:15:32.479 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:32.479 "is_configured": true, 00:15:32.479 "data_offset": 0, 00:15:32.479 "data_size": 65536 00:15:32.479 } 00:15:32.479 ] 00:15:32.479 }' 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.479 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.045 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.045 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.045 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.045 [2024-11-27 08:47:29.588286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.045 [2024-11-27 08:47:29.606207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:15:33.045 08:47:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.045 08:47:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:33.045 [2024-11-27 08:47:29.608982] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.980 "name": "raid_bdev1", 00:15:33.980 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:33.980 "strip_size_kb": 0, 00:15:33.980 "state": "online", 00:15:33.980 "raid_level": "raid1", 00:15:33.980 "superblock": false, 00:15:33.980 "num_base_bdevs": 2, 00:15:33.980 "num_base_bdevs_discovered": 2, 00:15:33.980 "num_base_bdevs_operational": 2, 00:15:33.980 "process": { 00:15:33.980 "type": "rebuild", 00:15:33.980 "target": "spare", 00:15:33.980 "progress": { 00:15:33.980 "blocks": 18432, 00:15:33.980 "percent": 28 00:15:33.980 } 00:15:33.980 }, 00:15:33.980 "base_bdevs_list": [ 00:15:33.980 { 00:15:33.980 "name": "spare", 00:15:33.980 "uuid": "cc6b716a-e84f-5940-9568-e95af1bd491f", 00:15:33.980 "is_configured": true, 00:15:33.980 "data_offset": 0, 00:15:33.980 
"data_size": 65536 00:15:33.980 }, 00:15:33.980 { 00:15:33.980 "name": "BaseBdev2", 00:15:33.980 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:33.980 "is_configured": true, 00:15:33.980 "data_offset": 0, 00:15:33.980 "data_size": 65536 00:15:33.980 } 00:15:33.980 ] 00:15:33.980 }' 00:15:33.980 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.238 [2024-11-27 08:47:30.794891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.238 [2024-11-27 08:47:30.820968] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.238 [2024-11-27 08:47:30.821068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.238 [2024-11-27 08:47:30.821104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.238 [2024-11-27 08:47:30.821121] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.238 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.238 "name": "raid_bdev1", 00:15:34.238 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:34.238 "strip_size_kb": 0, 00:15:34.238 "state": "online", 00:15:34.238 "raid_level": "raid1", 00:15:34.238 "superblock": false, 00:15:34.238 "num_base_bdevs": 2, 00:15:34.238 "num_base_bdevs_discovered": 1, 00:15:34.238 "num_base_bdevs_operational": 1, 00:15:34.238 "base_bdevs_list": [ 00:15:34.238 { 00:15:34.239 "name": null, 00:15:34.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.239 
"is_configured": false, 00:15:34.239 "data_offset": 0, 00:15:34.239 "data_size": 65536 00:15:34.239 }, 00:15:34.239 { 00:15:34.239 "name": "BaseBdev2", 00:15:34.239 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:34.239 "is_configured": true, 00:15:34.239 "data_offset": 0, 00:15:34.239 "data_size": 65536 00:15:34.239 } 00:15:34.239 ] 00:15:34.239 }' 00:15:34.239 08:47:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.239 08:47:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.806 "name": "raid_bdev1", 00:15:34.806 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:34.806 "strip_size_kb": 0, 00:15:34.806 "state": "online", 00:15:34.806 "raid_level": "raid1", 00:15:34.806 "superblock": false, 00:15:34.806 "num_base_bdevs": 2, 00:15:34.806 
"num_base_bdevs_discovered": 1, 00:15:34.806 "num_base_bdevs_operational": 1, 00:15:34.806 "base_bdevs_list": [ 00:15:34.806 { 00:15:34.806 "name": null, 00:15:34.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.806 "is_configured": false, 00:15:34.806 "data_offset": 0, 00:15:34.806 "data_size": 65536 00:15:34.806 }, 00:15:34.806 { 00:15:34.806 "name": "BaseBdev2", 00:15:34.806 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:34.806 "is_configured": true, 00:15:34.806 "data_offset": 0, 00:15:34.806 "data_size": 65536 00:15:34.806 } 00:15:34.806 ] 00:15:34.806 }' 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.806 [2024-11-27 08:47:31.503296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.806 [2024-11-27 08:47:31.519746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.806 08:47:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:34.806 [2024-11-27 08:47:31.522415] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.182 "name": "raid_bdev1", 00:15:36.182 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:36.182 "strip_size_kb": 0, 00:15:36.182 "state": "online", 00:15:36.182 "raid_level": "raid1", 00:15:36.182 "superblock": false, 00:15:36.182 "num_base_bdevs": 2, 00:15:36.182 "num_base_bdevs_discovered": 2, 00:15:36.182 "num_base_bdevs_operational": 2, 00:15:36.182 "process": { 00:15:36.182 "type": "rebuild", 00:15:36.182 "target": "spare", 00:15:36.182 "progress": { 00:15:36.182 "blocks": 20480, 00:15:36.182 "percent": 31 00:15:36.182 } 00:15:36.182 }, 00:15:36.182 "base_bdevs_list": [ 00:15:36.182 { 00:15:36.182 "name": "spare", 00:15:36.182 "uuid": "cc6b716a-e84f-5940-9568-e95af1bd491f", 00:15:36.182 "is_configured": true, 00:15:36.182 "data_offset": 0, 00:15:36.182 "data_size": 65536 00:15:36.182 }, 00:15:36.182 { 00:15:36.182 "name": "BaseBdev2", 00:15:36.182 "uuid": 
"497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:36.182 "is_configured": true, 00:15:36.182 "data_offset": 0, 00:15:36.182 "data_size": 65536 00:15:36.182 } 00:15:36.182 ] 00:15:36.182 }' 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.182 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.182 "name": "raid_bdev1", 00:15:36.182 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:36.182 "strip_size_kb": 0, 00:15:36.182 "state": "online", 00:15:36.182 "raid_level": "raid1", 00:15:36.182 "superblock": false, 00:15:36.182 "num_base_bdevs": 2, 00:15:36.182 "num_base_bdevs_discovered": 2, 00:15:36.182 "num_base_bdevs_operational": 2, 00:15:36.182 "process": { 00:15:36.182 "type": "rebuild", 00:15:36.183 "target": "spare", 00:15:36.183 "progress": { 00:15:36.183 "blocks": 22528, 00:15:36.183 "percent": 34 00:15:36.183 } 00:15:36.183 }, 00:15:36.183 "base_bdevs_list": [ 00:15:36.183 { 00:15:36.183 "name": "spare", 00:15:36.183 "uuid": "cc6b716a-e84f-5940-9568-e95af1bd491f", 00:15:36.183 "is_configured": true, 00:15:36.183 "data_offset": 0, 00:15:36.183 "data_size": 65536 00:15:36.183 }, 00:15:36.183 { 00:15:36.183 "name": "BaseBdev2", 00:15:36.183 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:36.183 "is_configured": true, 00:15:36.183 "data_offset": 0, 00:15:36.183 "data_size": 65536 00:15:36.183 } 00:15:36.183 ] 00:15:36.183 }' 00:15:36.183 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.183 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.183 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.183 08:47:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.183 08:47:32 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.115 08:47:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.373 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.373 "name": "raid_bdev1", 00:15:37.373 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:37.373 "strip_size_kb": 0, 00:15:37.373 "state": "online", 00:15:37.373 "raid_level": "raid1", 00:15:37.373 "superblock": false, 00:15:37.373 "num_base_bdevs": 2, 00:15:37.373 "num_base_bdevs_discovered": 2, 00:15:37.373 "num_base_bdevs_operational": 2, 00:15:37.373 "process": { 00:15:37.373 "type": "rebuild", 00:15:37.373 "target": "spare", 00:15:37.373 "progress": { 00:15:37.373 "blocks": 45056, 00:15:37.373 "percent": 68 00:15:37.373 } 00:15:37.373 }, 00:15:37.373 "base_bdevs_list": [ 00:15:37.373 { 00:15:37.373 "name": "spare", 00:15:37.373 "uuid": 
"cc6b716a-e84f-5940-9568-e95af1bd491f", 00:15:37.373 "is_configured": true, 00:15:37.373 "data_offset": 0, 00:15:37.373 "data_size": 65536 00:15:37.373 }, 00:15:37.373 { 00:15:37.373 "name": "BaseBdev2", 00:15:37.373 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:37.373 "is_configured": true, 00:15:37.373 "data_offset": 0, 00:15:37.373 "data_size": 65536 00:15:37.373 } 00:15:37.373 ] 00:15:37.373 }' 00:15:37.373 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.373 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.373 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.373 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.373 08:47:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.313 [2024-11-27 08:47:34.751521] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:38.313 [2024-11-27 08:47:34.751883] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:38.313 [2024-11-27 08:47:34.751980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.313 08:47:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.313 08:47:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.313 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.313 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.313 "name": "raid_bdev1", 00:15:38.313 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:38.313 "strip_size_kb": 0, 00:15:38.313 "state": "online", 00:15:38.313 "raid_level": "raid1", 00:15:38.313 "superblock": false, 00:15:38.313 "num_base_bdevs": 2, 00:15:38.313 "num_base_bdevs_discovered": 2, 00:15:38.313 "num_base_bdevs_operational": 2, 00:15:38.313 "base_bdevs_list": [ 00:15:38.313 { 00:15:38.313 "name": "spare", 00:15:38.313 "uuid": "cc6b716a-e84f-5940-9568-e95af1bd491f", 00:15:38.313 "is_configured": true, 00:15:38.313 "data_offset": 0, 00:15:38.313 "data_size": 65536 00:15:38.313 }, 00:15:38.313 { 00:15:38.313 "name": "BaseBdev2", 00:15:38.313 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:38.313 "is_configured": true, 00:15:38.313 "data_offset": 0, 00:15:38.313 "data_size": 65536 00:15:38.313 } 00:15:38.313 ] 00:15:38.313 }' 00:15:38.313 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.571 "name": "raid_bdev1", 00:15:38.571 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:38.571 "strip_size_kb": 0, 00:15:38.571 "state": "online", 00:15:38.571 "raid_level": "raid1", 00:15:38.571 "superblock": false, 00:15:38.571 "num_base_bdevs": 2, 00:15:38.571 "num_base_bdevs_discovered": 2, 00:15:38.571 "num_base_bdevs_operational": 2, 00:15:38.571 "base_bdevs_list": [ 00:15:38.571 { 00:15:38.571 "name": "spare", 00:15:38.571 "uuid": "cc6b716a-e84f-5940-9568-e95af1bd491f", 00:15:38.571 "is_configured": true, 00:15:38.571 "data_offset": 0, 00:15:38.571 "data_size": 65536 00:15:38.571 }, 00:15:38.571 { 00:15:38.571 "name": "BaseBdev2", 00:15:38.571 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:38.571 "is_configured": true, 00:15:38.571 "data_offset": 0, 00:15:38.571 "data_size": 65536 
00:15:38.571 } 00:15:38.571 ] 00:15:38.571 }' 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.571 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.830 
08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.830 "name": "raid_bdev1", 00:15:38.830 "uuid": "57054b57-de7c-47c1-a837-55bfed0eeaa7", 00:15:38.830 "strip_size_kb": 0, 00:15:38.830 "state": "online", 00:15:38.830 "raid_level": "raid1", 00:15:38.830 "superblock": false, 00:15:38.830 "num_base_bdevs": 2, 00:15:38.830 "num_base_bdevs_discovered": 2, 00:15:38.830 "num_base_bdevs_operational": 2, 00:15:38.830 "base_bdevs_list": [ 00:15:38.830 { 00:15:38.830 "name": "spare", 00:15:38.830 "uuid": "cc6b716a-e84f-5940-9568-e95af1bd491f", 00:15:38.830 "is_configured": true, 00:15:38.830 "data_offset": 0, 00:15:38.830 "data_size": 65536 00:15:38.830 }, 00:15:38.830 { 00:15:38.830 "name": "BaseBdev2", 00:15:38.830 "uuid": "497a2542-2f36-58a3-9285-b81c89b23a38", 00:15:38.830 "is_configured": true, 00:15:38.830 "data_offset": 0, 00:15:38.830 "data_size": 65536 00:15:38.830 } 00:15:38.830 ] 00:15:38.830 }' 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.830 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.088 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.088 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.088 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.088 [2024-11-27 08:47:35.845212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.088 [2024-11-27 08:47:35.845255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.088 [2024-11-27 08:47:35.845404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.088 [2024-11-27 08:47:35.845515] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.088 [2024-11-27 08:47:35.845534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.347 08:47:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:39.605 /dev/nbd0 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # break 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.605 1+0 records in 00:15:39.605 1+0 records out 00:15:39.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223453 s, 18.3 MB/s 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.605 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:39.864 /dev/nbd1 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # break 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.864 1+0 records in 00:15:39.864 1+0 records out 00:15:39.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363843 s, 11.3 MB/s 00:15:39.864 08:47:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.864 08:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:40.123 08:47:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:40.123 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.123 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.123 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.123 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:40.123 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.123 08:47:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.382 
08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.382 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75624 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' -z 75624 ']' 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # kill -0 75624 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 
-- # uname 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:15:40.640 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 75624 00:15:40.899 killing process with pid 75624 00:15:40.899 Received shutdown signal, test time was about 60.000000 seconds 00:15:40.899 00:15:40.899 Latency(us) 00:15:40.899 [2024-11-27T08:47:37.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.899 [2024-11-27T08:47:37.659Z] =================================================================================================================== 00:15:40.899 [2024-11-27T08:47:37.659Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:40.899 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:15:40.899 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:15:40.899 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 75624' 00:15:40.899 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # kill 75624 00:15:40.899 [2024-11-27 08:47:37.412197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.899 08:47:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@975 -- # wait 75624 00:15:41.157 [2024-11-27 08:47:37.690758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.093 ************************************ 00:15:42.093 END TEST raid_rebuild_test 00:15:42.093 ************************************ 00:15:42.093 08:47:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:42.093 00:15:42.093 real 0m18.399s 00:15:42.093 user 0m21.216s 00:15:42.093 sys 0m3.527s 00:15:42.093 08:47:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:15:42.093 08:47:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.093 08:47:38 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:42.093 08:47:38 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:15:42.093 08:47:38 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:15:42.094 08:47:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.094 ************************************ 00:15:42.094 START TEST raid_rebuild_test_sb 00:15:42.094 ************************************ 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 2 true false true 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.094 08:47:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76075 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:42.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76075 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # '[' -z 76075 ']' 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:15:42.094 08:47:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.353 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:42.353 Zero copy mechanism will not be used. 00:15:42.353 [2024-11-27 08:47:38.952898] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:15:42.353 [2024-11-27 08:47:38.953095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76075 ] 00:15:42.611 [2024-11-27 08:47:39.143432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.611 [2024-11-27 08:47:39.309035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.869 [2024-11-27 08:47:39.531098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.869 [2024-11-27 08:47:39.531157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # return 0 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.437 BaseBdev1_malloc 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.437 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.437 [2024-11-27 08:47:39.952863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:43.438 [2024-11-27 08:47:39.953078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.438 [2024-11-27 08:47:39.953127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:43.438 [2024-11-27 08:47:39.953150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.438 [2024-11-27 08:47:39.956076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.438 [2024-11-27 08:47:39.956251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:43.438 BaseBdev1 00:15:43.438 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.438 08:47:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.438 08:47:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:43.438 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.438 08:47:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.438 BaseBdev2_malloc 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.438 [2024-11-27 08:47:40.007997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:43.438 [2024-11-27 08:47:40.008075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.438 [2024-11-27 08:47:40.008106] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:43.438 [2024-11-27 08:47:40.008129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.438 [2024-11-27 08:47:40.011004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.438 [2024-11-27 08:47:40.011193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:43.438 BaseBdev2 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.438 spare_malloc 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.438 spare_delay 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.438 [2024-11-27 08:47:40.079485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:15:43.438 [2024-11-27 08:47:40.079566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.438 [2024-11-27 08:47:40.079598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:43.438 [2024-11-27 08:47:40.079618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.438 [2024-11-27 08:47:40.082608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.438 [2024-11-27 08:47:40.082787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.438 spare 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.438 [2024-11-27 08:47:40.087676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.438 [2024-11-27 08:47:40.090213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.438 [2024-11-27 08:47:40.090605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:43.438 [2024-11-27 08:47:40.090639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:43.438 [2024-11-27 08:47:40.090953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:43.438 [2024-11-27 08:47:40.091179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:43.438 [2024-11-27 08:47:40.091196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:15:43.438 [2024-11-27 08:47:40.091409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:43.438 "name": "raid_bdev1", 00:15:43.438 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:43.438 "strip_size_kb": 0, 00:15:43.438 "state": "online", 00:15:43.438 "raid_level": "raid1", 00:15:43.438 "superblock": true, 00:15:43.438 "num_base_bdevs": 2, 00:15:43.438 "num_base_bdevs_discovered": 2, 00:15:43.438 "num_base_bdevs_operational": 2, 00:15:43.438 "base_bdevs_list": [ 00:15:43.438 { 00:15:43.438 "name": "BaseBdev1", 00:15:43.438 "uuid": "5cb74b90-0be8-5c62-bff9-9802d1ea84c5", 00:15:43.438 "is_configured": true, 00:15:43.438 "data_offset": 2048, 00:15:43.438 "data_size": 63488 00:15:43.438 }, 00:15:43.438 { 00:15:43.438 "name": "BaseBdev2", 00:15:43.438 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:43.438 "is_configured": true, 00:15:43.438 "data_offset": 2048, 00:15:43.438 "data_size": 63488 00:15:43.438 } 00:15:43.438 ] 00:15:43.438 }' 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.438 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.006 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.006 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.006 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.007 [2024-11-27 08:47:40.620227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:44.007 08:47:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:44.575 [2024-11-27 08:47:41.064017] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:44.575 /dev/nbd0 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.575 1+0 records in 00:15:44.575 1+0 records out 00:15:44.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621805 s, 6.6 MB/s 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:15:44.575 08:47:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:44.575 08:47:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:51.141 63488+0 records in 00:15:51.141 63488+0 records out 00:15:51.141 32505856 bytes (33 MB, 31 MiB) copied, 6.3326 s, 5.1 MB/s 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.141 [2024-11-27 08:47:47.728999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.141 [2024-11-27 08:47:47.747290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.141 "name": "raid_bdev1", 00:15:51.141 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:51.141 "strip_size_kb": 0, 00:15:51.141 "state": "online", 00:15:51.141 "raid_level": "raid1", 00:15:51.141 "superblock": true, 00:15:51.141 "num_base_bdevs": 2, 00:15:51.141 "num_base_bdevs_discovered": 1, 00:15:51.141 "num_base_bdevs_operational": 1, 00:15:51.141 "base_bdevs_list": [ 00:15:51.141 { 00:15:51.141 "name": null, 00:15:51.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.141 "is_configured": false, 00:15:51.141 "data_offset": 0, 00:15:51.141 "data_size": 63488 00:15:51.141 }, 00:15:51.141 { 00:15:51.141 "name": "BaseBdev2", 00:15:51.141 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:51.141 "is_configured": true, 00:15:51.141 "data_offset": 2048, 00:15:51.141 "data_size": 63488 00:15:51.141 } 00:15:51.141 ] 00:15:51.141 }' 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.141 08:47:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.736 08:47:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:15:51.736 08:47:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.736 08:47:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.736 [2024-11-27 08:47:48.247490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.736 [2024-11-27 08:47:48.265076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:51.737 08:47:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.737 08:47:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:51.737 [2024-11-27 08:47:48.267852] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:52.673 "name": "raid_bdev1", 00:15:52.673 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:52.673 "strip_size_kb": 0, 00:15:52.673 "state": "online", 00:15:52.673 "raid_level": "raid1", 00:15:52.673 "superblock": true, 00:15:52.673 "num_base_bdevs": 2, 00:15:52.673 "num_base_bdevs_discovered": 2, 00:15:52.673 "num_base_bdevs_operational": 2, 00:15:52.673 "process": { 00:15:52.673 "type": "rebuild", 00:15:52.673 "target": "spare", 00:15:52.673 "progress": { 00:15:52.673 "blocks": 20480, 00:15:52.673 "percent": 32 00:15:52.673 } 00:15:52.673 }, 00:15:52.673 "base_bdevs_list": [ 00:15:52.673 { 00:15:52.673 "name": "spare", 00:15:52.673 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:15:52.673 "is_configured": true, 00:15:52.673 "data_offset": 2048, 00:15:52.673 "data_size": 63488 00:15:52.673 }, 00:15:52.673 { 00:15:52.673 "name": "BaseBdev2", 00:15:52.673 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:52.673 "is_configured": true, 00:15:52.673 "data_offset": 2048, 00:15:52.673 "data_size": 63488 00:15:52.673 } 00:15:52.673 ] 00:15:52.673 }' 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.673 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.673 [2024-11-27 08:47:49.425041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.930 [2024-11-27 
08:47:49.478740] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:52.930 [2024-11-27 08:47:49.479017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.930 [2024-11-27 08:47:49.479150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.930 [2024-11-27 08:47:49.479210] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.930 "name": "raid_bdev1", 00:15:52.930 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:52.930 "strip_size_kb": 0, 00:15:52.930 "state": "online", 00:15:52.930 "raid_level": "raid1", 00:15:52.930 "superblock": true, 00:15:52.930 "num_base_bdevs": 2, 00:15:52.930 "num_base_bdevs_discovered": 1, 00:15:52.930 "num_base_bdevs_operational": 1, 00:15:52.930 "base_bdevs_list": [ 00:15:52.930 { 00:15:52.930 "name": null, 00:15:52.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.930 "is_configured": false, 00:15:52.930 "data_offset": 0, 00:15:52.930 "data_size": 63488 00:15:52.930 }, 00:15:52.930 { 00:15:52.930 "name": "BaseBdev2", 00:15:52.930 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:52.930 "is_configured": true, 00:15:52.930 "data_offset": 2048, 00:15:52.930 "data_size": 63488 00:15:52.930 } 00:15:52.930 ] 00:15:52.930 }' 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.930 08:47:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.494 "name": "raid_bdev1", 00:15:53.494 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:53.494 "strip_size_kb": 0, 00:15:53.494 "state": "online", 00:15:53.494 "raid_level": "raid1", 00:15:53.494 "superblock": true, 00:15:53.494 "num_base_bdevs": 2, 00:15:53.494 "num_base_bdevs_discovered": 1, 00:15:53.494 "num_base_bdevs_operational": 1, 00:15:53.494 "base_bdevs_list": [ 00:15:53.494 { 00:15:53.494 "name": null, 00:15:53.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.494 "is_configured": false, 00:15:53.494 "data_offset": 0, 00:15:53.494 "data_size": 63488 00:15:53.494 }, 00:15:53.494 { 00:15:53.494 "name": "BaseBdev2", 00:15:53.494 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:53.494 "is_configured": true, 00:15:53.494 "data_offset": 2048, 00:15:53.494 "data_size": 63488 00:15:53.494 } 00:15:53.494 ] 00:15:53.494 }' 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.494 08:47:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.494 [2024-11-27 08:47:50.189204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.494 [2024-11-27 08:47:50.205885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.494 08:47:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:53.494 [2024-11-27 08:47:50.208751] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.866 "name": "raid_bdev1", 00:15:54.866 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:54.866 "strip_size_kb": 0, 00:15:54.866 "state": "online", 00:15:54.866 "raid_level": "raid1", 00:15:54.866 "superblock": true, 00:15:54.866 "num_base_bdevs": 2, 00:15:54.866 "num_base_bdevs_discovered": 2, 00:15:54.866 "num_base_bdevs_operational": 2, 00:15:54.866 "process": { 00:15:54.866 "type": "rebuild", 00:15:54.866 "target": "spare", 00:15:54.866 "progress": { 00:15:54.866 "blocks": 20480, 00:15:54.866 "percent": 32 00:15:54.866 } 00:15:54.866 }, 00:15:54.866 "base_bdevs_list": [ 00:15:54.866 { 00:15:54.866 "name": "spare", 00:15:54.866 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:15:54.866 "is_configured": true, 00:15:54.866 "data_offset": 2048, 00:15:54.866 "data_size": 63488 00:15:54.866 }, 00:15:54.866 { 00:15:54.866 "name": "BaseBdev2", 00:15:54.866 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:54.866 "is_configured": true, 00:15:54.866 "data_offset": 2048, 00:15:54.866 "data_size": 63488 00:15:54.866 } 00:15:54.866 ] 00:15:54.866 }' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:54.866 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:54.866 08:47:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=425 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.866 "name": "raid_bdev1", 00:15:54.866 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:54.866 "strip_size_kb": 0, 00:15:54.866 "state": "online", 00:15:54.866 "raid_level": "raid1", 00:15:54.866 "superblock": true, 00:15:54.866 "num_base_bdevs": 2, 00:15:54.866 
"num_base_bdevs_discovered": 2, 00:15:54.866 "num_base_bdevs_operational": 2, 00:15:54.866 "process": { 00:15:54.866 "type": "rebuild", 00:15:54.866 "target": "spare", 00:15:54.866 "progress": { 00:15:54.866 "blocks": 22528, 00:15:54.866 "percent": 35 00:15:54.866 } 00:15:54.866 }, 00:15:54.866 "base_bdevs_list": [ 00:15:54.866 { 00:15:54.866 "name": "spare", 00:15:54.866 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:15:54.866 "is_configured": true, 00:15:54.866 "data_offset": 2048, 00:15:54.866 "data_size": 63488 00:15:54.866 }, 00:15:54.866 { 00:15:54.866 "name": "BaseBdev2", 00:15:54.866 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:54.866 "is_configured": true, 00:15:54.866 "data_offset": 2048, 00:15:54.866 "data_size": 63488 00:15:54.866 } 00:15:54.866 ] 00:15:54.866 }' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.866 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.867 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.867 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.867 08:47:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.802 08:47:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.064 "name": "raid_bdev1", 00:15:56.064 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:56.064 "strip_size_kb": 0, 00:15:56.064 "state": "online", 00:15:56.064 "raid_level": "raid1", 00:15:56.064 "superblock": true, 00:15:56.064 "num_base_bdevs": 2, 00:15:56.064 "num_base_bdevs_discovered": 2, 00:15:56.064 "num_base_bdevs_operational": 2, 00:15:56.064 "process": { 00:15:56.064 "type": "rebuild", 00:15:56.064 "target": "spare", 00:15:56.064 "progress": { 00:15:56.064 "blocks": 45056, 00:15:56.064 "percent": 70 00:15:56.064 } 00:15:56.064 }, 00:15:56.064 "base_bdevs_list": [ 00:15:56.064 { 00:15:56.064 "name": "spare", 00:15:56.064 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:15:56.064 "is_configured": true, 00:15:56.064 "data_offset": 2048, 00:15:56.064 "data_size": 63488 00:15:56.064 }, 00:15:56.064 { 00:15:56.064 "name": "BaseBdev2", 00:15:56.064 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:56.064 "is_configured": true, 00:15:56.064 "data_offset": 2048, 00:15:56.064 "data_size": 63488 00:15:56.064 } 00:15:56.064 ] 00:15:56.064 }' 00:15:56.064 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.064 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.064 08:47:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.064 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.064 08:47:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.630 [2024-11-27 08:47:53.337619] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:56.630 [2024-11-27 08:47:53.337731] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:56.630 [2024-11-27 08:47:53.337900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:57.197 "name": "raid_bdev1", 00:15:57.197 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:57.197 "strip_size_kb": 0, 00:15:57.197 "state": "online", 00:15:57.197 "raid_level": "raid1", 00:15:57.197 "superblock": true, 00:15:57.197 "num_base_bdevs": 2, 00:15:57.197 "num_base_bdevs_discovered": 2, 00:15:57.197 "num_base_bdevs_operational": 2, 00:15:57.197 "base_bdevs_list": [ 00:15:57.197 { 00:15:57.197 "name": "spare", 00:15:57.197 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:15:57.197 "is_configured": true, 00:15:57.197 "data_offset": 2048, 00:15:57.197 "data_size": 63488 00:15:57.197 }, 00:15:57.197 { 00:15:57.197 "name": "BaseBdev2", 00:15:57.197 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:57.197 "is_configured": true, 00:15:57.197 "data_offset": 2048, 00:15:57.197 "data_size": 63488 00:15:57.197 } 00:15:57.197 ] 00:15:57.197 }' 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.197 08:47:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.197 "name": "raid_bdev1", 00:15:57.197 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:57.197 "strip_size_kb": 0, 00:15:57.197 "state": "online", 00:15:57.197 "raid_level": "raid1", 00:15:57.197 "superblock": true, 00:15:57.197 "num_base_bdevs": 2, 00:15:57.197 "num_base_bdevs_discovered": 2, 00:15:57.197 "num_base_bdevs_operational": 2, 00:15:57.197 "base_bdevs_list": [ 00:15:57.197 { 00:15:57.197 "name": "spare", 00:15:57.197 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:15:57.197 "is_configured": true, 00:15:57.197 "data_offset": 2048, 00:15:57.197 "data_size": 63488 00:15:57.197 }, 00:15:57.197 { 00:15:57.197 "name": "BaseBdev2", 00:15:57.197 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:57.197 "is_configured": true, 00:15:57.197 "data_offset": 2048, 00:15:57.197 "data_size": 63488 00:15:57.197 } 00:15:57.197 ] 00:15:57.197 }' 00:15:57.197 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.456 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.456 08:47:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.456 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.456 08:47:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.456 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.457 "name": "raid_bdev1", 00:15:57.457 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:57.457 "strip_size_kb": 0, 00:15:57.457 "state": "online", 00:15:57.457 "raid_level": "raid1", 00:15:57.457 "superblock": true, 00:15:57.457 "num_base_bdevs": 2, 00:15:57.457 
"num_base_bdevs_discovered": 2, 00:15:57.457 "num_base_bdevs_operational": 2, 00:15:57.457 "base_bdevs_list": [ 00:15:57.457 { 00:15:57.457 "name": "spare", 00:15:57.457 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:15:57.457 "is_configured": true, 00:15:57.457 "data_offset": 2048, 00:15:57.457 "data_size": 63488 00:15:57.457 }, 00:15:57.457 { 00:15:57.457 "name": "BaseBdev2", 00:15:57.457 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:57.457 "is_configured": true, 00:15:57.457 "data_offset": 2048, 00:15:57.457 "data_size": 63488 00:15:57.457 } 00:15:57.457 ] 00:15:57.457 }' 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.457 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.024 [2024-11-27 08:47:54.495494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.024 [2024-11-27 08:47:54.495674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.024 [2024-11-27 08:47:54.495821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.024 [2024-11-27 08:47:54.495925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.024 [2024-11-27 08:47:54.495943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.024 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:58.282 /dev/nbd0 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.282 1+0 records in 00:15:58.282 1+0 records out 00:15:58.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650088 s, 6.3 MB/s 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.282 08:47:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.282 08:47:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:58.540 /dev/nbd1 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.540 1+0 records in 00:15:58.540 1+0 records out 00:15:58.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500338 s, 8.2 MB/s 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.540 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:58.798 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:58.798 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.798 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:58.798 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.798 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:58.798 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.798 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.056 08:47:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.316 08:47:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.316 [2024-11-27 08:47:56.057570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.316 [2024-11-27 08:47:56.057652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.316 [2024-11-27 08:47:56.057691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:59.316 [2024-11-27 08:47:56.057708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.316 [2024-11-27 08:47:56.060820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.316 [2024-11-27 08:47:56.060865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.316 [2024-11-27 08:47:56.060993] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:59.316 [2024-11-27 08:47:56.061062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.316 [2024-11-27 08:47:56.061256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.316 spare 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.316 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.574 [2024-11-27 08:47:56.161421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:59.574 [2024-11-27 08:47:56.161503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:59.574 [2024-11-27 
08:47:56.161998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:59.575 [2024-11-27 08:47:56.162276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:59.575 [2024-11-27 08:47:56.162309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:59.575 [2024-11-27 08:47:56.162610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.575 "name": "raid_bdev1", 00:15:59.575 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:15:59.575 "strip_size_kb": 0, 00:15:59.575 "state": "online", 00:15:59.575 "raid_level": "raid1", 00:15:59.575 "superblock": true, 00:15:59.575 "num_base_bdevs": 2, 00:15:59.575 "num_base_bdevs_discovered": 2, 00:15:59.575 "num_base_bdevs_operational": 2, 00:15:59.575 "base_bdevs_list": [ 00:15:59.575 { 00:15:59.575 "name": "spare", 00:15:59.575 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:15:59.575 "is_configured": true, 00:15:59.575 "data_offset": 2048, 00:15:59.575 "data_size": 63488 00:15:59.575 }, 00:15:59.575 { 00:15:59.575 "name": "BaseBdev2", 00:15:59.575 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:15:59.575 "is_configured": true, 00:15:59.575 "data_offset": 2048, 00:15:59.575 "data_size": 63488 00:15:59.575 } 00:15:59.575 ] 00:15:59.575 }' 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.575 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.214 "name": "raid_bdev1", 00:16:00.214 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:00.214 "strip_size_kb": 0, 00:16:00.214 "state": "online", 00:16:00.214 "raid_level": "raid1", 00:16:00.214 "superblock": true, 00:16:00.214 "num_base_bdevs": 2, 00:16:00.214 "num_base_bdevs_discovered": 2, 00:16:00.214 "num_base_bdevs_operational": 2, 00:16:00.214 "base_bdevs_list": [ 00:16:00.214 { 00:16:00.214 "name": "spare", 00:16:00.214 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:16:00.214 "is_configured": true, 00:16:00.214 "data_offset": 2048, 00:16:00.214 "data_size": 63488 00:16:00.214 }, 00:16:00.214 { 00:16:00.214 "name": "BaseBdev2", 00:16:00.214 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:00.214 "is_configured": true, 00:16:00.214 "data_offset": 2048, 00:16:00.214 "data_size": 63488 00:16:00.214 } 00:16:00.214 ] 00:16:00.214 }' 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.214 08:47:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.214 [2024-11-27 08:47:56.910836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.214 08:47:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.214 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.476 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.476 "name": "raid_bdev1", 00:16:00.476 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:00.476 "strip_size_kb": 0, 00:16:00.476 "state": "online", 00:16:00.476 "raid_level": "raid1", 00:16:00.476 "superblock": true, 00:16:00.476 "num_base_bdevs": 2, 00:16:00.476 "num_base_bdevs_discovered": 1, 00:16:00.476 "num_base_bdevs_operational": 1, 00:16:00.476 "base_bdevs_list": [ 00:16:00.476 { 00:16:00.476 "name": null, 00:16:00.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.476 "is_configured": false, 00:16:00.476 "data_offset": 0, 00:16:00.476 "data_size": 63488 00:16:00.476 }, 00:16:00.476 { 00:16:00.476 "name": "BaseBdev2", 00:16:00.476 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:00.476 "is_configured": true, 00:16:00.476 "data_offset": 2048, 00:16:00.476 "data_size": 63488 00:16:00.476 } 00:16:00.476 ] 00:16:00.476 }' 00:16:00.476 08:47:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.476 08:47:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:00.738 08:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.738 08:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.738 08:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.738 [2024-11-27 08:47:57.415016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.738 [2024-11-27 08:47:57.415333] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:00.738 [2024-11-27 08:47:57.415378] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:00.738 [2024-11-27 08:47:57.415453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.738 [2024-11-27 08:47:57.432453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:00.738 08:47:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.738 08:47:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:00.738 [2024-11-27 08:47:57.435131] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.115 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.115 "name": "raid_bdev1", 00:16:02.116 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:02.116 "strip_size_kb": 0, 00:16:02.116 "state": "online", 00:16:02.116 "raid_level": "raid1", 00:16:02.116 "superblock": true, 00:16:02.116 "num_base_bdevs": 2, 00:16:02.116 "num_base_bdevs_discovered": 2, 00:16:02.116 "num_base_bdevs_operational": 2, 00:16:02.116 "process": { 00:16:02.116 "type": "rebuild", 00:16:02.116 "target": "spare", 00:16:02.116 "progress": { 00:16:02.116 "blocks": 18432, 00:16:02.116 "percent": 29 00:16:02.116 } 00:16:02.116 }, 00:16:02.116 "base_bdevs_list": [ 00:16:02.116 { 00:16:02.116 "name": "spare", 00:16:02.116 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:16:02.116 "is_configured": true, 00:16:02.116 "data_offset": 2048, 00:16:02.116 "data_size": 63488 00:16:02.116 }, 00:16:02.116 { 00:16:02.116 "name": "BaseBdev2", 00:16:02.116 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:02.116 "is_configured": true, 00:16:02.116 "data_offset": 2048, 00:16:02.116 "data_size": 63488 00:16:02.116 } 00:16:02.116 ] 00:16:02.116 }' 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.116 [2024-11-27 08:47:58.600749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.116 [2024-11-27 08:47:58.646698] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.116 [2024-11-27 08:47:58.646810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.116 [2024-11-27 08:47:58.646834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.116 [2024-11-27 08:47:58.646849] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.116 
08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.116 "name": "raid_bdev1", 00:16:02.116 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:02.116 "strip_size_kb": 0, 00:16:02.116 "state": "online", 00:16:02.116 "raid_level": "raid1", 00:16:02.116 "superblock": true, 00:16:02.116 "num_base_bdevs": 2, 00:16:02.116 "num_base_bdevs_discovered": 1, 00:16:02.116 "num_base_bdevs_operational": 1, 00:16:02.116 "base_bdevs_list": [ 00:16:02.116 { 00:16:02.116 "name": null, 00:16:02.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.116 "is_configured": false, 00:16:02.116 "data_offset": 0, 00:16:02.116 "data_size": 63488 00:16:02.116 }, 00:16:02.116 { 00:16:02.116 "name": "BaseBdev2", 00:16:02.116 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:02.116 "is_configured": true, 00:16:02.116 "data_offset": 2048, 00:16:02.116 "data_size": 63488 00:16:02.116 } 00:16:02.116 ] 00:16:02.116 }' 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.116 08:47:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:02.682 08:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:02.682 08:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.682 08:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.682 [2024-11-27 08:47:59.176312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:02.682 [2024-11-27 08:47:59.176435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.682 [2024-11-27 08:47:59.176473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:02.682 [2024-11-27 08:47:59.176495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.682 [2024-11-27 08:47:59.177180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.682 [2024-11-27 08:47:59.177220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:02.682 [2024-11-27 08:47:59.177371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:02.682 [2024-11-27 08:47:59.177399] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:02.682 [2024-11-27 08:47:59.177415] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:02.682 [2024-11-27 08:47:59.177454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.682 [2024-11-27 08:47:59.194244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:02.682 spare 00:16:02.682 08:47:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.682 08:47:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:02.682 [2024-11-27 08:47:59.196912] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.617 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.617 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.617 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.617 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.617 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.617 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.617 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.617 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.618 "name": "raid_bdev1", 00:16:03.618 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:03.618 "strip_size_kb": 0, 00:16:03.618 "state": "online", 00:16:03.618 
"raid_level": "raid1", 00:16:03.618 "superblock": true, 00:16:03.618 "num_base_bdevs": 2, 00:16:03.618 "num_base_bdevs_discovered": 2, 00:16:03.618 "num_base_bdevs_operational": 2, 00:16:03.618 "process": { 00:16:03.618 "type": "rebuild", 00:16:03.618 "target": "spare", 00:16:03.618 "progress": { 00:16:03.618 "blocks": 20480, 00:16:03.618 "percent": 32 00:16:03.618 } 00:16:03.618 }, 00:16:03.618 "base_bdevs_list": [ 00:16:03.618 { 00:16:03.618 "name": "spare", 00:16:03.618 "uuid": "fc152929-d0aa-5dc2-89d3-c26074083a8f", 00:16:03.618 "is_configured": true, 00:16:03.618 "data_offset": 2048, 00:16:03.618 "data_size": 63488 00:16:03.618 }, 00:16:03.618 { 00:16:03.618 "name": "BaseBdev2", 00:16:03.618 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:03.618 "is_configured": true, 00:16:03.618 "data_offset": 2048, 00:16:03.618 "data_size": 63488 00:16:03.618 } 00:16:03.618 ] 00:16:03.618 }' 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.618 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.618 [2024-11-27 08:48:00.362359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.877 [2024-11-27 08:48:00.407877] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:03.877 [2024-11-27 08:48:00.407962] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.877 [2024-11-27 08:48:00.407991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.877 [2024-11-27 08:48:00.408003] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.877 08:48:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.877 "name": "raid_bdev1", 00:16:03.877 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:03.877 "strip_size_kb": 0, 00:16:03.877 "state": "online", 00:16:03.877 "raid_level": "raid1", 00:16:03.877 "superblock": true, 00:16:03.877 "num_base_bdevs": 2, 00:16:03.877 "num_base_bdevs_discovered": 1, 00:16:03.877 "num_base_bdevs_operational": 1, 00:16:03.877 "base_bdevs_list": [ 00:16:03.877 { 00:16:03.877 "name": null, 00:16:03.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.877 "is_configured": false, 00:16:03.877 "data_offset": 0, 00:16:03.877 "data_size": 63488 00:16:03.877 }, 00:16:03.877 { 00:16:03.877 "name": "BaseBdev2", 00:16:03.877 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:03.877 "is_configured": true, 00:16:03.877 "data_offset": 2048, 00:16:03.877 "data_size": 63488 00:16:03.877 } 00:16:03.877 ] 00:16:03.877 }' 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.877 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.450 08:48:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.450 "name": "raid_bdev1", 00:16:04.450 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:04.450 "strip_size_kb": 0, 00:16:04.450 "state": "online", 00:16:04.450 "raid_level": "raid1", 00:16:04.450 "superblock": true, 00:16:04.450 "num_base_bdevs": 2, 00:16:04.450 "num_base_bdevs_discovered": 1, 00:16:04.450 "num_base_bdevs_operational": 1, 00:16:04.450 "base_bdevs_list": [ 00:16:04.450 { 00:16:04.450 "name": null, 00:16:04.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.450 "is_configured": false, 00:16:04.450 "data_offset": 0, 00:16:04.450 "data_size": 63488 00:16:04.450 }, 00:16:04.450 { 00:16:04.450 "name": "BaseBdev2", 00:16:04.450 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:04.450 "is_configured": true, 00:16:04.450 "data_offset": 2048, 00:16:04.450 "data_size": 63488 00:16:04.450 } 00:16:04.450 ] 00:16:04.450 }' 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.450 [2024-11-27 08:48:01.129292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.450 [2024-11-27 08:48:01.129392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.450 [2024-11-27 08:48:01.129433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:04.450 [2024-11-27 08:48:01.129463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.450 [2024-11-27 08:48:01.130115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.450 [2024-11-27 08:48:01.130157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.450 [2024-11-27 08:48:01.130297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:04.450 [2024-11-27 08:48:01.130348] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:04.450 [2024-11-27 08:48:01.130379] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:04.450 [2024-11-27 08:48:01.130395] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:04.450 BaseBdev1 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:04.450 08:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.386 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.645 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.645 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.645 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.645 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.645 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.645 "name": "raid_bdev1", 00:16:05.645 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:05.645 "strip_size_kb": 0, 
00:16:05.645 "state": "online", 00:16:05.645 "raid_level": "raid1", 00:16:05.645 "superblock": true, 00:16:05.645 "num_base_bdevs": 2, 00:16:05.645 "num_base_bdevs_discovered": 1, 00:16:05.645 "num_base_bdevs_operational": 1, 00:16:05.645 "base_bdevs_list": [ 00:16:05.645 { 00:16:05.645 "name": null, 00:16:05.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.645 "is_configured": false, 00:16:05.645 "data_offset": 0, 00:16:05.645 "data_size": 63488 00:16:05.645 }, 00:16:05.645 { 00:16:05.645 "name": "BaseBdev2", 00:16:05.645 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:05.645 "is_configured": true, 00:16:05.645 "data_offset": 2048, 00:16:05.645 "data_size": 63488 00:16:05.645 } 00:16:05.645 ] 00:16:05.645 }' 00:16:05.645 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.645 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.213 "name": "raid_bdev1", 00:16:06.213 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:06.213 "strip_size_kb": 0, 00:16:06.213 "state": "online", 00:16:06.213 "raid_level": "raid1", 00:16:06.213 "superblock": true, 00:16:06.213 "num_base_bdevs": 2, 00:16:06.213 "num_base_bdevs_discovered": 1, 00:16:06.213 "num_base_bdevs_operational": 1, 00:16:06.213 "base_bdevs_list": [ 00:16:06.213 { 00:16:06.213 "name": null, 00:16:06.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.213 "is_configured": false, 00:16:06.213 "data_offset": 0, 00:16:06.213 "data_size": 63488 00:16:06.213 }, 00:16:06.213 { 00:16:06.213 "name": "BaseBdev2", 00:16:06.213 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:06.213 "is_configured": true, 00:16:06.213 "data_offset": 2048, 00:16:06.213 "data_size": 63488 00:16:06.213 } 00:16:06.213 ] 00:16:06.213 }' 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:06.213 08:48:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.213 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 [2024-11-27 08:48:02.865968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.213 [2024-11-27 08:48:02.866429] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:06.213 [2024-11-27 08:48:02.866601] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:06.213 request: 00:16:06.213 { 00:16:06.213 "base_bdev": "BaseBdev1", 00:16:06.213 "raid_bdev": "raid_bdev1", 00:16:06.213 "method": "bdev_raid_add_base_bdev", 00:16:06.213 "req_id": 1 00:16:06.213 } 00:16:06.213 Got JSON-RPC error response 00:16:06.213 response: 00:16:06.213 { 00:16:06.213 "code": -22, 00:16:06.213 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:06.214 } 00:16:06.214 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:06.214 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:06.214 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.214 08:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.214 08:48:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.214 08:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.148 08:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.406 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.406 "name": "raid_bdev1", 00:16:07.406 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 
00:16:07.406 "strip_size_kb": 0, 00:16:07.406 "state": "online", 00:16:07.406 "raid_level": "raid1", 00:16:07.406 "superblock": true, 00:16:07.406 "num_base_bdevs": 2, 00:16:07.406 "num_base_bdevs_discovered": 1, 00:16:07.406 "num_base_bdevs_operational": 1, 00:16:07.406 "base_bdevs_list": [ 00:16:07.406 { 00:16:07.406 "name": null, 00:16:07.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.406 "is_configured": false, 00:16:07.406 "data_offset": 0, 00:16:07.406 "data_size": 63488 00:16:07.406 }, 00:16:07.406 { 00:16:07.406 "name": "BaseBdev2", 00:16:07.406 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:07.406 "is_configured": true, 00:16:07.406 "data_offset": 2048, 00:16:07.406 "data_size": 63488 00:16:07.406 } 00:16:07.406 ] 00:16:07.406 }' 00:16:07.406 08:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.406 08:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.665 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.665 08:48:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.923 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.923 "name": "raid_bdev1", 00:16:07.923 "uuid": "a732e7d3-f969-4263-818b-ab1aadf8db38", 00:16:07.923 "strip_size_kb": 0, 00:16:07.923 "state": "online", 00:16:07.923 "raid_level": "raid1", 00:16:07.923 "superblock": true, 00:16:07.923 "num_base_bdevs": 2, 00:16:07.923 "num_base_bdevs_discovered": 1, 00:16:07.923 "num_base_bdevs_operational": 1, 00:16:07.923 "base_bdevs_list": [ 00:16:07.923 { 00:16:07.923 "name": null, 00:16:07.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.924 "is_configured": false, 00:16:07.924 "data_offset": 0, 00:16:07.924 "data_size": 63488 00:16:07.924 }, 00:16:07.924 { 00:16:07.924 "name": "BaseBdev2", 00:16:07.924 "uuid": "e7956105-bb1f-5650-bb33-9f328228d563", 00:16:07.924 "is_configured": true, 00:16:07.924 "data_offset": 2048, 00:16:07.924 "data_size": 63488 00:16:07.924 } 00:16:07.924 ] 00:16:07.924 }' 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76075 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' -z 76075 ']' 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # kill -0 76075 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # uname 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' 
Linux = Linux ']' 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 76075 00:16:07.924 killing process with pid 76075 00:16:07.924 Received shutdown signal, test time was about 60.000000 seconds 00:16:07.924 00:16:07.924 Latency(us) 00:16:07.924 [2024-11-27T08:48:04.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.924 [2024-11-27T08:48:04.684Z] =================================================================================================================== 00:16:07.924 [2024-11-27T08:48:04.684Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 76075' 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # kill 76075 00:16:07.924 [2024-11-27 08:48:04.597847] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.924 08:48:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@975 -- # wait 76075 00:16:07.924 [2024-11-27 08:48:04.598033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.924 [2024-11-27 08:48:04.598111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.924 [2024-11-27 08:48:04.598131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:08.218 [2024-11-27 08:48:04.879277] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.593 08:48:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:09.593 00:16:09.593 real 0m27.159s 
00:16:09.593 user 0m33.330s 00:16:09.593 sys 0m3.937s 00:16:09.593 08:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:16:09.593 08:48:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.593 ************************************ 00:16:09.593 END TEST raid_rebuild_test_sb 00:16:09.593 ************************************ 00:16:09.593 08:48:06 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:16:09.593 08:48:06 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:16:09.593 08:48:06 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:16:09.594 08:48:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.594 ************************************ 00:16:09.594 START TEST raid_rebuild_test_io 00:16:09.594 ************************************ 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 2 false true true 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.594 
08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76843 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76843 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # '[' -z 76843 ']' 00:16:09.594 08:48:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local max_retries=100 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@841 -- # xtrace_disable 00:16:09.594 08:48:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.594 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:09.594 Zero copy mechanism will not be used. 00:16:09.594 [2024-11-27 08:48:06.154223] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:16:09.594 [2024-11-27 08:48:06.154411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76843 ] 00:16:09.594 [2024-11-27 08:48:06.332954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.852 [2024-11-27 08:48:06.472467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.110 [2024-11-27 08:48:06.692136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.110 [2024-11-27 08:48:06.692174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # return 0 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 BaseBdev1_malloc 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 [2024-11-27 08:48:07.185504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:10.677 [2024-11-27 08:48:07.185590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.677 [2024-11-27 08:48:07.185626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:10.677 [2024-11-27 08:48:07.185645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.677 [2024-11-27 08:48:07.188653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.677 [2024-11-27 08:48:07.188704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.677 BaseBdev1 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.677 08:48:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 BaseBdev2_malloc 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 [2024-11-27 08:48:07.242470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:10.677 [2024-11-27 08:48:07.242564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.677 [2024-11-27 08:48:07.242597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:10.677 [2024-11-27 08:48:07.242619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.677 [2024-11-27 08:48:07.245516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.677 [2024-11-27 08:48:07.245566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.677 BaseBdev2 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 spare_malloc 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 spare_delay 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 [2024-11-27 08:48:07.321707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.677 [2024-11-27 08:48:07.321783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.677 [2024-11-27 08:48:07.321813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:10.677 [2024-11-27 08:48:07.321832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.677 [2024-11-27 08:48:07.324818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.677 spare 00:16:10.677 [2024-11-27 08:48:07.324991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:10.677 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.677 08:48:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 [2024-11-27 08:48:07.329859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.677 [2024-11-27 08:48:07.332443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.677 [2024-11-27 08:48:07.332574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:10.677 [2024-11-27 08:48:07.332598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:10.677 [2024-11-27 08:48:07.332911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:10.677 [2024-11-27 08:48:07.333120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:10.677 [2024-11-27 08:48:07.333139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:10.678 [2024-11-27 08:48:07.333328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.678 "name": "raid_bdev1", 00:16:10.678 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:10.678 "strip_size_kb": 0, 00:16:10.678 "state": "online", 00:16:10.678 "raid_level": "raid1", 00:16:10.678 "superblock": false, 00:16:10.678 "num_base_bdevs": 2, 00:16:10.678 "num_base_bdevs_discovered": 2, 00:16:10.678 "num_base_bdevs_operational": 2, 00:16:10.678 "base_bdevs_list": [ 00:16:10.678 { 00:16:10.678 "name": "BaseBdev1", 00:16:10.678 "uuid": "1a99ab05-29ed-55d5-abb2-83b3a4d55212", 00:16:10.678 "is_configured": true, 00:16:10.678 "data_offset": 0, 00:16:10.678 "data_size": 65536 00:16:10.678 }, 00:16:10.678 { 00:16:10.678 "name": "BaseBdev2", 00:16:10.678 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:10.678 "is_configured": true, 00:16:10.678 "data_offset": 0, 00:16:10.678 "data_size": 65536 00:16:10.678 } 00:16:10.678 ] 00:16:10.678 }' 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.678 08:48:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:11.245 [2024-11-27 08:48:07.834458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.245 [2024-11-27 08:48:07.937990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.245 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:11.245 "name": "raid_bdev1", 00:16:11.245 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:11.245 "strip_size_kb": 0, 00:16:11.245 "state": "online", 00:16:11.245 "raid_level": "raid1", 00:16:11.245 "superblock": false, 00:16:11.245 "num_base_bdevs": 2, 00:16:11.245 "num_base_bdevs_discovered": 1, 00:16:11.245 "num_base_bdevs_operational": 1, 00:16:11.245 "base_bdevs_list": [ 00:16:11.245 { 00:16:11.245 "name": null, 00:16:11.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.246 "is_configured": false, 00:16:11.246 "data_offset": 0, 00:16:11.246 "data_size": 65536 00:16:11.246 }, 00:16:11.246 { 00:16:11.246 "name": "BaseBdev2", 00:16:11.246 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:11.246 "is_configured": true, 00:16:11.246 "data_offset": 0, 00:16:11.246 "data_size": 65536 00:16:11.246 } 00:16:11.246 ] 00:16:11.246 }' 00:16:11.246 08:48:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.246 08:48:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.505 [2024-11-27 08:48:08.071184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:11.505 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:11.505 Zero copy mechanism will not be used. 00:16:11.505 Running I/O for 60 seconds... 
00:16:11.763 08:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.763 08:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.763 08:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.763 [2024-11-27 08:48:08.461776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.763 08:48:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.763 08:48:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:12.040 [2024-11-27 08:48:08.524539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:12.040 [2024-11-27 08:48:08.527341] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.040 [2024-11-27 08:48:08.646019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:12.304 [2024-11-27 08:48:08.873326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:12.563 193.00 IOPS, 579.00 MiB/s [2024-11-27T08:48:09.323Z] [2024-11-27 08:48:09.137645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:12.563 [2024-11-27 08:48:09.271187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:12.563 [2024-11-27 08:48:09.271665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.823 "name": "raid_bdev1", 00:16:12.823 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:12.823 "strip_size_kb": 0, 00:16:12.823 "state": "online", 00:16:12.823 "raid_level": "raid1", 00:16:12.823 "superblock": false, 00:16:12.823 "num_base_bdevs": 2, 00:16:12.823 "num_base_bdevs_discovered": 2, 00:16:12.823 "num_base_bdevs_operational": 2, 00:16:12.823 "process": { 00:16:12.823 "type": "rebuild", 00:16:12.823 "target": "spare", 00:16:12.823 "progress": { 00:16:12.823 "blocks": 12288, 00:16:12.823 "percent": 18 00:16:12.823 } 00:16:12.823 }, 00:16:12.823 "base_bdevs_list": [ 00:16:12.823 { 00:16:12.823 "name": "spare", 00:16:12.823 "uuid": "a94dae7e-f0c4-533b-a762-14c17aa4a357", 00:16:12.823 "is_configured": true, 00:16:12.823 "data_offset": 0, 00:16:12.823 "data_size": 65536 00:16:12.823 }, 00:16:12.823 { 00:16:12.823 "name": "BaseBdev2", 00:16:12.823 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:12.823 "is_configured": true, 00:16:12.823 "data_offset": 0, 00:16:12.823 
"data_size": 65536 00:16:12.823 } 00:16:12.823 ] 00:16:12.823 }' 00:16:12.823 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.081 [2024-11-27 08:48:09.610531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.081 [2024-11-27 08:48:09.618889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.081 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.081 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.081 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.081 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.081 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.081 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.081 [2024-11-27 08:48:09.680167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.081 [2024-11-27 08:48:09.745741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.339 [2024-11-27 08:48:09.853917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.339 [2024-11-27 08:48:09.857082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.339 [2024-11-27 08:48:09.857121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.339 [2024-11-27 08:48:09.857142] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.339 [2024-11-27 
08:48:09.905615] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.339 "name": 
"raid_bdev1", 00:16:13.339 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:13.339 "strip_size_kb": 0, 00:16:13.339 "state": "online", 00:16:13.339 "raid_level": "raid1", 00:16:13.339 "superblock": false, 00:16:13.339 "num_base_bdevs": 2, 00:16:13.339 "num_base_bdevs_discovered": 1, 00:16:13.339 "num_base_bdevs_operational": 1, 00:16:13.339 "base_bdevs_list": [ 00:16:13.339 { 00:16:13.339 "name": null, 00:16:13.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.339 "is_configured": false, 00:16:13.339 "data_offset": 0, 00:16:13.339 "data_size": 65536 00:16:13.339 }, 00:16:13.339 { 00:16:13.339 "name": "BaseBdev2", 00:16:13.339 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:13.339 "is_configured": true, 00:16:13.339 "data_offset": 0, 00:16:13.339 "data_size": 65536 00:16:13.339 } 00:16:13.339 ] 00:16:13.339 }' 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.339 08:48:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.907 139.00 IOPS, 417.00 MiB/s [2024-11-27T08:48:10.667Z] 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.907 "name": "raid_bdev1", 00:16:13.907 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:13.907 "strip_size_kb": 0, 00:16:13.907 "state": "online", 00:16:13.907 "raid_level": "raid1", 00:16:13.907 "superblock": false, 00:16:13.907 "num_base_bdevs": 2, 00:16:13.907 "num_base_bdevs_discovered": 1, 00:16:13.907 "num_base_bdevs_operational": 1, 00:16:13.907 "base_bdevs_list": [ 00:16:13.907 { 00:16:13.907 "name": null, 00:16:13.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.907 "is_configured": false, 00:16:13.907 "data_offset": 0, 00:16:13.907 "data_size": 65536 00:16:13.907 }, 00:16:13.907 { 00:16:13.907 "name": "BaseBdev2", 00:16:13.907 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:13.907 "is_configured": true, 00:16:13.907 "data_offset": 0, 00:16:13.907 "data_size": 65536 00:16:13.907 } 00:16:13.907 ] 00:16:13.907 }' 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.907 08:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.907 [2024-11-27 08:48:10.626911] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.165 08:48:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.165 08:48:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.165 [2024-11-27 08:48:10.698942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:14.165 [2024-11-27 08:48:10.702027] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.424 [2024-11-27 08:48:10.992753] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:14.424 [2024-11-27 08:48:10.993315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:14.683 158.00 IOPS, 474.00 MiB/s [2024-11-27T08:48:11.443Z] [2024-11-27 08:48:11.328410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:14.941 [2024-11-27 08:48:11.440525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:14.941 [2024-11-27 08:48:11.678949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:14.941 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.941 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.941 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.941 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.941 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.941 08:48:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.941 08:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.941 08:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.941 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.199 08:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.199 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.199 "name": "raid_bdev1", 00:16:15.199 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:15.199 "strip_size_kb": 0, 00:16:15.199 "state": "online", 00:16:15.199 "raid_level": "raid1", 00:16:15.199 "superblock": false, 00:16:15.199 "num_base_bdevs": 2, 00:16:15.199 "num_base_bdevs_discovered": 2, 00:16:15.199 "num_base_bdevs_operational": 2, 00:16:15.199 "process": { 00:16:15.199 "type": "rebuild", 00:16:15.199 "target": "spare", 00:16:15.199 "progress": { 00:16:15.199 "blocks": 14336, 00:16:15.199 "percent": 21 00:16:15.199 } 00:16:15.199 }, 00:16:15.199 "base_bdevs_list": [ 00:16:15.199 { 00:16:15.199 "name": "spare", 00:16:15.199 "uuid": "a94dae7e-f0c4-533b-a762-14c17aa4a357", 00:16:15.199 "is_configured": true, 00:16:15.199 "data_offset": 0, 00:16:15.199 "data_size": 65536 00:16:15.200 }, 00:16:15.200 { 00:16:15.200 "name": "BaseBdev2", 00:16:15.200 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:15.200 "is_configured": true, 00:16:15.200 "data_offset": 0, 00:16:15.200 "data_size": 65536 00:16:15.200 } 00:16:15.200 ] 00:16:15.200 }' 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.200 [2024-11-27 08:48:11.790211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:15.200 
[2024-11-27 08:48:11.790721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- 
# set +x 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.200 "name": "raid_bdev1", 00:16:15.200 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:15.200 "strip_size_kb": 0, 00:16:15.200 "state": "online", 00:16:15.200 "raid_level": "raid1", 00:16:15.200 "superblock": false, 00:16:15.200 "num_base_bdevs": 2, 00:16:15.200 "num_base_bdevs_discovered": 2, 00:16:15.200 "num_base_bdevs_operational": 2, 00:16:15.200 "process": { 00:16:15.200 "type": "rebuild", 00:16:15.200 "target": "spare", 00:16:15.200 "progress": { 00:16:15.200 "blocks": 16384, 00:16:15.200 "percent": 25 00:16:15.200 } 00:16:15.200 }, 00:16:15.200 "base_bdevs_list": [ 00:16:15.200 { 00:16:15.200 "name": "spare", 00:16:15.200 "uuid": "a94dae7e-f0c4-533b-a762-14c17aa4a357", 00:16:15.200 "is_configured": true, 00:16:15.200 "data_offset": 0, 00:16:15.200 "data_size": 65536 00:16:15.200 }, 00:16:15.200 { 00:16:15.200 "name": "BaseBdev2", 00:16:15.200 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:15.200 "is_configured": true, 00:16:15.200 "data_offset": 0, 00:16:15.200 "data_size": 65536 00:16:15.200 } 00:16:15.200 ] 00:16:15.200 }' 00:16:15.200 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.459 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.459 08:48:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.459 08:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.459 08:48:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.718 140.75 IOPS, 422.25 MiB/s 
[2024-11-27T08:48:12.478Z] [2024-11-27 08:48:12.237214] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:15.718 [2024-11-27 08:48:12.237936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:16.292 [2024-11-27 08:48:12.945625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.292 08:48:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.552 08:48:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.552 [2024-11-27 08:48:13.067446] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:16.552 121.80 IOPS, 365.40 MiB/s [2024-11-27T08:48:13.312Z] 08:48:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.552 "name": "raid_bdev1", 00:16:16.552 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:16.552 "strip_size_kb": 0, 00:16:16.552 "state": "online", 00:16:16.552 "raid_level": "raid1", 00:16:16.552 "superblock": false, 00:16:16.552 "num_base_bdevs": 2, 00:16:16.552 "num_base_bdevs_discovered": 2, 00:16:16.552 "num_base_bdevs_operational": 2, 00:16:16.552 "process": { 00:16:16.552 "type": "rebuild", 00:16:16.552 "target": "spare", 00:16:16.552 "progress": { 00:16:16.552 "blocks": 32768, 00:16:16.552 "percent": 50 00:16:16.552 } 00:16:16.552 }, 00:16:16.552 "base_bdevs_list": [ 00:16:16.552 { 00:16:16.552 "name": "spare", 00:16:16.552 "uuid": "a94dae7e-f0c4-533b-a762-14c17aa4a357", 00:16:16.552 "is_configured": true, 00:16:16.552 "data_offset": 0, 00:16:16.552 "data_size": 65536 00:16:16.552 }, 00:16:16.552 { 00:16:16.552 "name": "BaseBdev2", 00:16:16.552 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:16.552 "is_configured": true, 00:16:16.552 "data_offset": 0, 00:16:16.552 "data_size": 65536 00:16:16.552 } 00:16:16.552 ] 00:16:16.552 }' 00:16:16.552 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.552 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.552 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.552 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.552 08:48:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.811 [2024-11-27 08:48:13.535691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:17.639 108.83 IOPS, 326.50 MiB/s [2024-11-27T08:48:14.399Z] 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.639 
08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.639 "name": "raid_bdev1", 00:16:17.639 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:17.639 "strip_size_kb": 0, 00:16:17.639 "state": "online", 00:16:17.639 "raid_level": "raid1", 00:16:17.639 "superblock": false, 00:16:17.639 "num_base_bdevs": 2, 00:16:17.639 "num_base_bdevs_discovered": 2, 00:16:17.639 "num_base_bdevs_operational": 2, 00:16:17.639 "process": { 00:16:17.639 "type": "rebuild", 00:16:17.639 "target": "spare", 00:16:17.639 "progress": { 00:16:17.639 "blocks": 51200, 00:16:17.639 "percent": 78 00:16:17.639 } 00:16:17.639 }, 00:16:17.639 "base_bdevs_list": [ 00:16:17.639 { 00:16:17.639 "name": "spare", 00:16:17.639 "uuid": "a94dae7e-f0c4-533b-a762-14c17aa4a357", 00:16:17.639 "is_configured": true, 00:16:17.639 "data_offset": 0, 00:16:17.639 "data_size": 
65536 00:16:17.639 }, 00:16:17.639 { 00:16:17.639 "name": "BaseBdev2", 00:16:17.639 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:17.639 "is_configured": true, 00:16:17.639 "data_offset": 0, 00:16:17.639 "data_size": 65536 00:16:17.639 } 00:16:17.639 ] 00:16:17.639 }' 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.639 08:48:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.207 [2024-11-27 08:48:14.916096] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:18.465 [2024-11-27 08:48:15.016128] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:18.465 [2024-11-27 08:48:15.019593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.724 98.29 IOPS, 294.86 MiB/s [2024-11-27T08:48:15.484Z] 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.724 "name": "raid_bdev1", 00:16:18.724 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:18.724 "strip_size_kb": 0, 00:16:18.724 "state": "online", 00:16:18.724 "raid_level": "raid1", 00:16:18.724 "superblock": false, 00:16:18.724 "num_base_bdevs": 2, 00:16:18.724 "num_base_bdevs_discovered": 2, 00:16:18.724 "num_base_bdevs_operational": 2, 00:16:18.724 "base_bdevs_list": [ 00:16:18.724 { 00:16:18.724 "name": "spare", 00:16:18.724 "uuid": "a94dae7e-f0c4-533b-a762-14c17aa4a357", 00:16:18.724 "is_configured": true, 00:16:18.724 "data_offset": 0, 00:16:18.724 "data_size": 65536 00:16:18.724 }, 00:16:18.724 { 00:16:18.724 "name": "BaseBdev2", 00:16:18.724 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:18.724 "is_configured": true, 00:16:18.724 "data_offset": 0, 00:16:18.724 "data_size": 65536 00:16:18.724 } 00:16:18.724 ] 00:16:18.724 }' 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:18.724 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # 
break 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.983 "name": "raid_bdev1", 00:16:18.983 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:18.983 "strip_size_kb": 0, 00:16:18.983 "state": "online", 00:16:18.983 "raid_level": "raid1", 00:16:18.983 "superblock": false, 00:16:18.983 "num_base_bdevs": 2, 00:16:18.983 "num_base_bdevs_discovered": 2, 00:16:18.983 "num_base_bdevs_operational": 2, 00:16:18.983 "base_bdevs_list": [ 00:16:18.983 { 00:16:18.983 "name": "spare", 00:16:18.983 "uuid": "a94dae7e-f0c4-533b-a762-14c17aa4a357", 00:16:18.983 "is_configured": true, 00:16:18.983 "data_offset": 0, 00:16:18.983 "data_size": 65536 00:16:18.983 }, 00:16:18.983 { 00:16:18.983 "name": "BaseBdev2", 00:16:18.983 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:18.983 "is_configured": true, 00:16:18.983 "data_offset": 0, 
00:16:18.983 "data_size": 65536 00:16:18.983 } 00:16:18.983 ] 00:16:18.983 }' 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.983 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.984 08:48:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.984 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.242 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.242 "name": "raid_bdev1", 00:16:19.242 "uuid": "e97fbe37-7287-4901-805b-0d3741b16580", 00:16:19.242 "strip_size_kb": 0, 00:16:19.242 "state": "online", 00:16:19.242 "raid_level": "raid1", 00:16:19.242 "superblock": false, 00:16:19.242 "num_base_bdevs": 2, 00:16:19.242 "num_base_bdevs_discovered": 2, 00:16:19.242 "num_base_bdevs_operational": 2, 00:16:19.242 "base_bdevs_list": [ 00:16:19.242 { 00:16:19.242 "name": "spare", 00:16:19.242 "uuid": "a94dae7e-f0c4-533b-a762-14c17aa4a357", 00:16:19.242 "is_configured": true, 00:16:19.242 "data_offset": 0, 00:16:19.242 "data_size": 65536 00:16:19.242 }, 00:16:19.242 { 00:16:19.242 "name": "BaseBdev2", 00:16:19.242 "uuid": "5051e202-071f-5727-aac8-4f19f7f00fca", 00:16:19.242 "is_configured": true, 00:16:19.242 "data_offset": 0, 00:16:19.242 "data_size": 65536 00:16:19.242 } 00:16:19.242 ] 00:16:19.242 }' 00:16:19.242 08:48:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.242 08:48:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.501 89.50 IOPS, 268.50 MiB/s [2024-11-27T08:48:16.261Z] 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.501 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.501 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.759 [2024-11-27 08:48:16.261470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.759 [2024-11-27 08:48:16.261505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.759 00:16:19.759 
Latency(us) 00:16:19.759 [2024-11-27T08:48:16.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.759 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:19.759 raid_bdev1 : 8.27 87.41 262.24 0.00 0.00 15365.62 271.83 117726.49 00:16:19.759 [2024-11-27T08:48:16.519Z] =================================================================================================================== 00:16:19.759 [2024-11-27T08:48:16.519Z] Total : 87.41 262.24 0.00 0.00 15365.62 271.83 117726.49 00:16:19.759 [2024-11-27 08:48:16.363898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.759 [2024-11-27 08:48:16.364118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.759 [2024-11-27 08:48:16.364288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.759 [2024-11-27 08:48:16.364485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:19.759 { 00:16:19.759 "results": [ 00:16:19.759 { 00:16:19.759 "job": "raid_bdev1", 00:16:19.759 "core_mask": "0x1", 00:16:19.759 "workload": "randrw", 00:16:19.759 "percentage": 50, 00:16:19.759 "status": "finished", 00:16:19.759 "queue_depth": 2, 00:16:19.759 "io_size": 3145728, 00:16:19.759 "runtime": 8.271148, 00:16:19.759 "iops": 87.41229149810884, 00:16:19.759 "mibps": 262.2368744943265, 00:16:19.759 "io_failed": 0, 00:16:19.759 "io_timeout": 0, 00:16:19.759 "avg_latency_us": 15365.616356092041, 00:16:19.759 "min_latency_us": 271.82545454545453, 00:16:19.759 "max_latency_us": 117726.48727272727 00:16:19.759 } 00:16:19.759 ], 00:16:19.759 "core_count": 1 00:16:19.759 } 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.759 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:20.018 /dev/nbd0 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:20.277 08:48:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local i 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # break 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.277 1+0 records in 00:16:20.277 1+0 records out 00:16:20.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294926 s, 13.9 MB/s 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # size=4096 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # return 0 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.277 08:48:16 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.278 08:48:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:20.580 /dev/nbd1 00:16:20.580 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:20.580 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:20.580 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:16:20.580 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local i 00:16:20.580 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # (( i <= 20 )) 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # break 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.581 1+0 records in 00:16:20.581 1+0 records out 00:16:20.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351823 s, 11.6 MB/s 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # size=4096 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # return 0 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.581 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:20.849 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:20.849 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.849 08:48:17 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:20.849 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.849 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:20.849 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.849 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:21.108 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.108 08:48:17 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76843 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' -z 76843 ']' 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # kill -0 76843 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # uname 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:16:21.367 08:48:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 76843 00:16:21.367 killing process with pid 76843 00:16:21.367 Received shutdown signal, test time was about 9.949065 seconds 00:16:21.367 00:16:21.367 Latency(us) 00:16:21.367 [2024-11-27T08:48:18.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.367 
[2024-11-27T08:48:18.127Z] =================================================================================================================== 00:16:21.367 [2024-11-27T08:48:18.127Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.367 08:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:16:21.367 08:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:16:21.367 08:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # echo 'killing process with pid 76843' 00:16:21.367 08:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # kill 76843 00:16:21.367 08:48:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@975 -- # wait 76843 00:16:21.368 [2024-11-27 08:48:18.023282] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.626 [2024-11-27 08:48:18.228265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:23.004 00:16:23.004 real 0m13.321s 00:16:23.004 user 0m17.438s 00:16:23.004 sys 0m1.481s 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # xtrace_disable 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.004 ************************************ 00:16:23.004 END TEST raid_rebuild_test_io 00:16:23.004 ************************************ 00:16:23.004 08:48:19 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:16:23.004 08:48:19 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:16:23.004 08:48:19 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:16:23.004 08:48:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.004 ************************************ 00:16:23.004 START TEST 
raid_rebuild_test_sb_io 00:16:23.004 ************************************ 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 2 true true true 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@576 -- # local strip_size 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77230 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77230 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # '[' -z 77230 ']' 00:16:23.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local max_retries=100 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@841 -- # xtrace_disable 00:16:23.004 08:48:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.004 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:23.004 Zero copy mechanism will not be used. 00:16:23.004 [2024-11-27 08:48:19.561043] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:16:23.004 [2024-11-27 08:48:19.561277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77230 ] 00:16:23.004 [2024-11-27 08:48:19.749126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.264 [2024-11-27 08:48:19.889498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.523 [2024-11-27 08:48:20.111462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.523 [2024-11-27 08:48:20.111509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.781 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:16:23.781 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # return 0 00:16:23.781 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:23.781 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:23.781 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.781 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.041 BaseBdev1_malloc 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.041 [2024-11-27 08:48:20.545829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:24.041 [2024-11-27 08:48:20.545932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.041 [2024-11-27 08:48:20.545966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:24.041 [2024-11-27 08:48:20.545985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.041 [2024-11-27 08:48:20.548861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.041 [2024-11-27 08:48:20.548909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.041 BaseBdev1 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.041 BaseBdev2_malloc 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.041 [2024-11-27 08:48:20.600666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:24.041 [2024-11-27 08:48:20.600745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.041 [2024-11-27 08:48:20.600774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:24.041 [2024-11-27 08:48:20.600800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.041 [2024-11-27 08:48:20.603867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.041 [2024-11-27 08:48:20.603928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:24.041 BaseBdev2 00:16:24.041 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.042 spare_malloc 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.042 spare_delay 
00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.042 [2024-11-27 08:48:20.674964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:24.042 [2024-11-27 08:48:20.675242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.042 [2024-11-27 08:48:20.675283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:24.042 [2024-11-27 08:48:20.675303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.042 [2024-11-27 08:48:20.678398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.042 [2024-11-27 08:48:20.678480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.042 spare 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.042 [2024-11-27 08:48:20.687135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.042 [2024-11-27 08:48:20.689804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.042 [2024-11-27 08:48:20.690065] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:24.042 [2024-11-27 08:48:20.690092] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.042 [2024-11-27 08:48:20.690430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:24.042 [2024-11-27 08:48:20.690658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:24.042 [2024-11-27 08:48:20.690673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:24.042 [2024-11-27 08:48:20.690857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.042 08:48:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.042 "name": "raid_bdev1", 00:16:24.042 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:24.042 "strip_size_kb": 0, 00:16:24.042 "state": "online", 00:16:24.042 "raid_level": "raid1", 00:16:24.042 "superblock": true, 00:16:24.042 "num_base_bdevs": 2, 00:16:24.042 "num_base_bdevs_discovered": 2, 00:16:24.042 "num_base_bdevs_operational": 2, 00:16:24.042 "base_bdevs_list": [ 00:16:24.042 { 00:16:24.042 "name": "BaseBdev1", 00:16:24.042 "uuid": "8829eb35-f7ad-5d65-af71-12ce010451b7", 00:16:24.042 "is_configured": true, 00:16:24.042 "data_offset": 2048, 00:16:24.042 "data_size": 63488 00:16:24.042 }, 00:16:24.042 { 00:16:24.042 "name": "BaseBdev2", 00:16:24.042 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:24.042 "is_configured": true, 00:16:24.042 "data_offset": 2048, 00:16:24.042 "data_size": 63488 00:16:24.042 } 00:16:24.042 ] 00:16:24.042 }' 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.042 08:48:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:24.610 08:48:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.610 [2024-11-27 08:48:21.211714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.610 [2024-11-27 08:48:21.315286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.610 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.611 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.870 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.870 "name": "raid_bdev1", 00:16:24.870 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:24.870 "strip_size_kb": 0, 00:16:24.870 "state": "online", 00:16:24.870 
"raid_level": "raid1", 00:16:24.870 "superblock": true, 00:16:24.870 "num_base_bdevs": 2, 00:16:24.870 "num_base_bdevs_discovered": 1, 00:16:24.870 "num_base_bdevs_operational": 1, 00:16:24.870 "base_bdevs_list": [ 00:16:24.870 { 00:16:24.870 "name": null, 00:16:24.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.870 "is_configured": false, 00:16:24.870 "data_offset": 0, 00:16:24.870 "data_size": 63488 00:16:24.870 }, 00:16:24.870 { 00:16:24.870 "name": "BaseBdev2", 00:16:24.870 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:24.870 "is_configured": true, 00:16:24.870 "data_offset": 2048, 00:16:24.870 "data_size": 63488 00:16:24.870 } 00:16:24.870 ] 00:16:24.870 }' 00:16:24.870 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.870 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.870 [2024-11-27 08:48:21.444391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:24.870 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:24.870 Zero copy mechanism will not be used. 00:16:24.870 Running I/O for 60 seconds... 
00:16:25.129 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.129 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.129 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.129 [2024-11-27 08:48:21.847479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.388 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.388 08:48:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:25.388 [2024-11-27 08:48:21.924529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:25.388 [2024-11-27 08:48:21.927336] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.388 [2024-11-27 08:48:22.049282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:25.388 [2024-11-27 08:48:22.050225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:25.648 [2024-11-27 08:48:22.262862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:25.648 [2024-11-27 08:48:22.263590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:25.907 188.00 IOPS, 564.00 MiB/s [2024-11-27T08:48:22.668Z] [2024-11-27 08:48:22.613732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:26.167 [2024-11-27 08:48:22.857185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:26.167 [2024-11-27 08:48:22.857777] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.167 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.426 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.426 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.426 "name": "raid_bdev1", 00:16:26.426 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:26.426 "strip_size_kb": 0, 00:16:26.426 "state": "online", 00:16:26.426 "raid_level": "raid1", 00:16:26.426 "superblock": true, 00:16:26.426 "num_base_bdevs": 2, 00:16:26.426 "num_base_bdevs_discovered": 2, 00:16:26.426 "num_base_bdevs_operational": 2, 00:16:26.426 "process": { 00:16:26.426 "type": "rebuild", 00:16:26.426 "target": "spare", 00:16:26.426 "progress": { 00:16:26.426 "blocks": 10240, 00:16:26.426 "percent": 16 00:16:26.426 } 00:16:26.426 }, 00:16:26.426 "base_bdevs_list": [ 00:16:26.426 { 00:16:26.426 "name": "spare", 
00:16:26.426 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:26.426 "is_configured": true, 00:16:26.426 "data_offset": 2048, 00:16:26.426 "data_size": 63488 00:16:26.426 }, 00:16:26.426 { 00:16:26.426 "name": "BaseBdev2", 00:16:26.426 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:26.426 "is_configured": true, 00:16:26.426 "data_offset": 2048, 00:16:26.426 "data_size": 63488 00:16:26.426 } 00:16:26.426 ] 00:16:26.426 }' 00:16:26.426 08:48:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.426 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.426 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.426 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.426 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:26.426 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.426 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.426 [2024-11-27 08:48:23.054802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.685 [2024-11-27 08:48:23.199728] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:26.685 [2024-11-27 08:48:23.203537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.685 [2024-11-27 08:48:23.203589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.685 [2024-11-27 08:48:23.203603] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:26.685 [2024-11-27 08:48:23.252976] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000006080 00:16:26.685 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.685 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.685 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.685 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.685 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.686 "name": "raid_bdev1", 00:16:26.686 "uuid": 
"f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:26.686 "strip_size_kb": 0, 00:16:26.686 "state": "online", 00:16:26.686 "raid_level": "raid1", 00:16:26.686 "superblock": true, 00:16:26.686 "num_base_bdevs": 2, 00:16:26.686 "num_base_bdevs_discovered": 1, 00:16:26.686 "num_base_bdevs_operational": 1, 00:16:26.686 "base_bdevs_list": [ 00:16:26.686 { 00:16:26.686 "name": null, 00:16:26.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.686 "is_configured": false, 00:16:26.686 "data_offset": 0, 00:16:26.686 "data_size": 63488 00:16:26.686 }, 00:16:26.686 { 00:16:26.686 "name": "BaseBdev2", 00:16:26.686 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:26.686 "is_configured": true, 00:16:26.686 "data_offset": 2048, 00:16:26.686 "data_size": 63488 00:16:26.686 } 00:16:26.686 ] 00:16:26.686 }' 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.686 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.203 133.00 IOPS, 399.00 MiB/s [2024-11-27T08:48:23.963Z] 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.203 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.203 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.204 08:48:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.204 "name": "raid_bdev1", 00:16:27.204 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:27.204 "strip_size_kb": 0, 00:16:27.204 "state": "online", 00:16:27.204 "raid_level": "raid1", 00:16:27.204 "superblock": true, 00:16:27.204 "num_base_bdevs": 2, 00:16:27.204 "num_base_bdevs_discovered": 1, 00:16:27.204 "num_base_bdevs_operational": 1, 00:16:27.204 "base_bdevs_list": [ 00:16:27.204 { 00:16:27.204 "name": null, 00:16:27.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.204 "is_configured": false, 00:16:27.204 "data_offset": 0, 00:16:27.204 "data_size": 63488 00:16:27.204 }, 00:16:27.204 { 00:16:27.204 "name": "BaseBdev2", 00:16:27.204 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:27.204 "is_configured": true, 00:16:27.204 "data_offset": 2048, 00:16:27.204 "data_size": 63488 00:16:27.204 } 00:16:27.204 ] 00:16:27.204 }' 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.204 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.463 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.463 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.463 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.463 08:48:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.463 
[2024-11-27 08:48:23.988048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.463 08:48:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.463 08:48:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:27.463 [2024-11-27 08:48:24.043708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:27.463 [2024-11-27 08:48:24.046517] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.463 [2024-11-27 08:48:24.149628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:27.463 [2024-11-27 08:48:24.150501] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:27.722 [2024-11-27 08:48:24.370283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:27.722 [2024-11-27 08:48:24.370830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:28.288 142.00 IOPS, 426.00 MiB/s [2024-11-27T08:48:25.048Z] [2024-11-27 08:48:24.858071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.288 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.547 "name": "raid_bdev1", 00:16:28.547 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:28.547 "strip_size_kb": 0, 00:16:28.547 "state": "online", 00:16:28.547 "raid_level": "raid1", 00:16:28.547 "superblock": true, 00:16:28.547 "num_base_bdevs": 2, 00:16:28.547 "num_base_bdevs_discovered": 2, 00:16:28.547 "num_base_bdevs_operational": 2, 00:16:28.547 "process": { 00:16:28.547 "type": "rebuild", 00:16:28.547 "target": "spare", 00:16:28.547 "progress": { 00:16:28.547 "blocks": 12288, 00:16:28.547 "percent": 19 00:16:28.547 } 00:16:28.547 }, 00:16:28.547 "base_bdevs_list": [ 00:16:28.547 { 00:16:28.547 "name": "spare", 00:16:28.547 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:28.547 "is_configured": true, 00:16:28.547 "data_offset": 2048, 00:16:28.547 "data_size": 63488 00:16:28.547 }, 00:16:28.547 { 00:16:28.547 "name": "BaseBdev2", 00:16:28.547 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:28.547 "is_configured": true, 00:16:28.547 "data_offset": 2048, 00:16:28.547 "data_size": 63488 00:16:28.547 } 00:16:28.547 ] 00:16:28.547 }' 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.547 [2024-11-27 08:48:25.090814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
14336 offset_begin: 12288 offset_end: 18432 00:16:28.547 [2024-11-27 08:48:25.091808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:28.547 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.547 "name": "raid_bdev1", 00:16:28.547 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:28.547 "strip_size_kb": 0, 00:16:28.547 "state": "online", 00:16:28.547 "raid_level": "raid1", 00:16:28.547 "superblock": true, 00:16:28.547 "num_base_bdevs": 2, 00:16:28.547 "num_base_bdevs_discovered": 2, 00:16:28.547 "num_base_bdevs_operational": 2, 00:16:28.547 "process": { 00:16:28.547 "type": "rebuild", 00:16:28.547 "target": "spare", 00:16:28.547 "progress": { 00:16:28.547 "blocks": 14336, 00:16:28.547 "percent": 22 00:16:28.547 } 00:16:28.547 }, 00:16:28.547 "base_bdevs_list": [ 00:16:28.547 { 00:16:28.547 "name": "spare", 00:16:28.547 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:28.547 "is_configured": true, 00:16:28.547 "data_offset": 2048, 00:16:28.547 "data_size": 63488 00:16:28.547 }, 00:16:28.547 { 00:16:28.547 "name": "BaseBdev2", 00:16:28.547 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:28.547 "is_configured": true, 00:16:28.547 "data_offset": 2048, 00:16:28.547 "data_size": 63488 00:16:28.547 } 00:16:28.547 ] 00:16:28.547 }' 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.547 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.547 
08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.806 [2024-11-27 08:48:25.319978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:28.806 [2024-11-27 08:48:25.320630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:28.806 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.806 08:48:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.377 124.75 IOPS, 374.25 MiB/s [2024-11-27T08:48:26.137Z] [2024-11-27 08:48:26.101581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:16:29.644 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.911 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.911 "name": "raid_bdev1", 00:16:29.911 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:29.911 "strip_size_kb": 0, 00:16:29.911 "state": "online", 00:16:29.911 "raid_level": "raid1", 00:16:29.911 "superblock": true, 00:16:29.911 "num_base_bdevs": 2, 00:16:29.911 "num_base_bdevs_discovered": 2, 00:16:29.911 "num_base_bdevs_operational": 2, 00:16:29.911 "process": { 00:16:29.911 "type": "rebuild", 00:16:29.911 "target": "spare", 00:16:29.911 "progress": { 00:16:29.911 "blocks": 28672, 00:16:29.911 "percent": 45 00:16:29.911 } 00:16:29.911 }, 00:16:29.911 "base_bdevs_list": [ 00:16:29.911 { 00:16:29.911 "name": "spare", 00:16:29.911 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:29.911 "is_configured": true, 00:16:29.911 "data_offset": 2048, 00:16:29.911 "data_size": 63488 00:16:29.911 }, 00:16:29.911 { 00:16:29.911 "name": "BaseBdev2", 00:16:29.911 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:29.911 "is_configured": true, 00:16:29.911 "data_offset": 2048, 00:16:29.911 "data_size": 63488 00:16:29.911 } 00:16:29.911 ] 00:16:29.911 }' 00:16:29.911 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.911 113.60 IOPS, 340.80 MiB/s [2024-11-27T08:48:26.671Z] 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.911 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.911 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.911 08:48:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.911 [2024-11-27 08:48:26.592439] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:30.170 [2024-11-27 08:48:26.806870] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:30.170 [2024-11-27 08:48:26.926292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:30.170 [2024-11-27 08:48:26.926820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:30.995 104.83 IOPS, 314.50 MiB/s [2024-11-27T08:48:27.755Z] 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.995 "name": 
"raid_bdev1", 00:16:30.995 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:30.995 "strip_size_kb": 0, 00:16:30.995 "state": "online", 00:16:30.995 "raid_level": "raid1", 00:16:30.995 "superblock": true, 00:16:30.995 "num_base_bdevs": 2, 00:16:30.995 "num_base_bdevs_discovered": 2, 00:16:30.995 "num_base_bdevs_operational": 2, 00:16:30.995 "process": { 00:16:30.995 "type": "rebuild", 00:16:30.995 "target": "spare", 00:16:30.995 "progress": { 00:16:30.995 "blocks": 49152, 00:16:30.995 "percent": 77 00:16:30.995 } 00:16:30.995 }, 00:16:30.995 "base_bdevs_list": [ 00:16:30.995 { 00:16:30.995 "name": "spare", 00:16:30.995 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:30.995 "is_configured": true, 00:16:30.995 "data_offset": 2048, 00:16:30.995 "data_size": 63488 00:16:30.995 }, 00:16:30.995 { 00:16:30.995 "name": "BaseBdev2", 00:16:30.995 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:30.995 "is_configured": true, 00:16:30.995 "data_offset": 2048, 00:16:30.995 "data_size": 63488 00:16:30.995 } 00:16:30.995 ] 00:16:30.995 }' 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.995 08:48:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.564 [2024-11-27 08:48:28.260997] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:31.821 [2024-11-27 08:48:28.361031] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:31.821 [2024-11-27 08:48:28.364183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:32.079 94.57 IOPS, 283.71 MiB/s [2024-11-27T08:48:28.839Z] 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.079 "name": "raid_bdev1", 00:16:32.079 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:32.079 "strip_size_kb": 0, 00:16:32.079 "state": "online", 00:16:32.079 "raid_level": "raid1", 00:16:32.079 "superblock": true, 00:16:32.079 "num_base_bdevs": 2, 00:16:32.079 "num_base_bdevs_discovered": 2, 00:16:32.079 "num_base_bdevs_operational": 2, 00:16:32.079 "base_bdevs_list": [ 00:16:32.079 { 00:16:32.079 "name": "spare", 00:16:32.079 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:32.079 "is_configured": true, 00:16:32.079 "data_offset": 2048, 00:16:32.079 "data_size": 
63488 00:16:32.079 }, 00:16:32.079 { 00:16:32.079 "name": "BaseBdev2", 00:16:32.079 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:32.079 "is_configured": true, 00:16:32.079 "data_offset": 2048, 00:16:32.079 "data_size": 63488 00:16:32.079 } 00:16:32.079 ] 00:16:32.079 }' 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:32.079 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.337 08:48:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.337 "name": "raid_bdev1", 00:16:32.337 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:32.337 "strip_size_kb": 0, 00:16:32.337 "state": "online", 00:16:32.337 "raid_level": "raid1", 00:16:32.337 "superblock": true, 00:16:32.337 "num_base_bdevs": 2, 00:16:32.337 "num_base_bdevs_discovered": 2, 00:16:32.337 "num_base_bdevs_operational": 2, 00:16:32.337 "base_bdevs_list": [ 00:16:32.337 { 00:16:32.337 "name": "spare", 00:16:32.337 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:32.337 "is_configured": true, 00:16:32.337 "data_offset": 2048, 00:16:32.337 "data_size": 63488 00:16:32.337 }, 00:16:32.337 { 00:16:32.337 "name": "BaseBdev2", 00:16:32.337 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:32.337 "is_configured": true, 00:16:32.337 "data_offset": 2048, 00:16:32.337 "data_size": 63488 00:16:32.337 } 00:16:32.337 ] 00:16:32.337 }' 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.337 08:48:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.337 "name": "raid_bdev1", 00:16:32.337 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:32.337 "strip_size_kb": 0, 00:16:32.337 "state": "online", 00:16:32.337 "raid_level": "raid1", 00:16:32.337 "superblock": true, 00:16:32.337 "num_base_bdevs": 2, 00:16:32.337 "num_base_bdevs_discovered": 2, 00:16:32.337 "num_base_bdevs_operational": 2, 00:16:32.337 "base_bdevs_list": [ 00:16:32.337 { 00:16:32.337 "name": "spare", 00:16:32.337 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:32.337 "is_configured": true, 00:16:32.337 "data_offset": 2048, 00:16:32.337 "data_size": 63488 00:16:32.337 }, 00:16:32.337 { 00:16:32.337 "name": "BaseBdev2", 00:16:32.337 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:32.337 "is_configured": true, 00:16:32.337 
"data_offset": 2048, 00:16:32.337 "data_size": 63488 00:16:32.337 } 00:16:32.337 ] 00:16:32.337 }' 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.337 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.901 87.12 IOPS, 261.38 MiB/s [2024-11-27T08:48:29.661Z] 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.901 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.901 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.901 [2024-11-27 08:48:29.536098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.901 [2024-11-27 08:48:29.536141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.901 00:16:32.901 Latency(us) 00:16:32.901 [2024-11-27T08:48:29.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.901 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:32.901 raid_bdev1 : 8.18 85.69 257.07 0.00 0.00 15435.77 269.96 112483.61 00:16:32.901 [2024-11-27T08:48:29.661Z] =================================================================================================================== 00:16:32.901 [2024-11-27T08:48:29.661Z] Total : 85.69 257.07 0.00 0.00 15435.77 269.96 112483.61 00:16:32.901 { 00:16:32.901 "results": [ 00:16:32.901 { 00:16:32.901 "job": "raid_bdev1", 00:16:32.901 "core_mask": "0x1", 00:16:32.901 "workload": "randrw", 00:16:32.901 "percentage": 50, 00:16:32.901 "status": "finished", 00:16:32.901 "queue_depth": 2, 00:16:32.901 "io_size": 3145728, 00:16:32.901 "runtime": 8.180539, 00:16:32.901 "iops": 85.69117511694523, 00:16:32.901 "mibps": 257.07352535083567, 00:16:32.901 "io_failed": 0, 00:16:32.901 "io_timeout": 0, 
00:16:32.901 "avg_latency_us": 15435.76882894566, 00:16:32.901 "min_latency_us": 269.96363636363634, 00:16:32.901 "max_latency_us": 112483.60727272727 00:16:32.901 } 00:16:32.901 ], 00:16:32.901 "core_count": 1 00:16:32.901 } 00:16:32.901 [2024-11-27 08:48:29.648950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.901 [2024-11-27 08:48:29.649041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.901 [2024-11-27 08:48:29.649177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.901 [2024-11-27 08:48:29.649205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:32.901 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:33.193 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:33.194 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.194 08:48:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:33.194 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.194 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:33.194 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.194 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:33.194 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.194 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.194 08:48:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:33.451 /dev/nbd0 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local i 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # break 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:16:33.451 08:48:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.451 1+0 records in 00:16:33.451 1+0 records out 00:16:33.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350492 s, 11.7 MB/s 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.451 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # size=4096 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # return 0 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.452 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:33.709 /dev/nbd1 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local i 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # break 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.709 1+0 records in 00:16:33.709 1+0 records out 00:16:33.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432764 s, 9.5 MB/s 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # size=4096 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # return 0 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.709 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:33.967 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:33.967 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.968 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:33.968 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.968 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:33.968 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.968 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.226 08:48:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd0 /proc/partitions 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.484 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.484 [2024-11-27 08:48:31.197509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.484 [2024-11-27 08:48:31.197593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.484 [2024-11-27 08:48:31.197628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:34.485 [2024-11-27 08:48:31.197656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.485 [2024-11-27 08:48:31.200952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.485 [2024-11-27 08:48:31.201017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.485 [2024-11-27 08:48:31.201145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.485 [2024-11-27 
08:48:31.201223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.485 [2024-11-27 08:48:31.201454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.485 spare 00:16:34.485 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.485 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:34.485 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.485 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.742 [2024-11-27 08:48:31.301684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:34.742 [2024-11-27 08:48:31.301941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:34.743 [2024-11-27 08:48:31.302495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:16:34.743 [2024-11-27 08:48:31.302772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:34.743 [2024-11-27 08:48:31.302809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:34.743 [2024-11-27 08:48:31.303081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.743 "name": "raid_bdev1", 00:16:34.743 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:34.743 "strip_size_kb": 0, 00:16:34.743 "state": "online", 00:16:34.743 "raid_level": "raid1", 00:16:34.743 "superblock": true, 00:16:34.743 "num_base_bdevs": 2, 00:16:34.743 "num_base_bdevs_discovered": 2, 00:16:34.743 "num_base_bdevs_operational": 2, 00:16:34.743 "base_bdevs_list": [ 00:16:34.743 { 00:16:34.743 "name": "spare", 00:16:34.743 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:34.743 "is_configured": true, 00:16:34.743 "data_offset": 2048, 00:16:34.743 "data_size": 63488 00:16:34.743 }, 00:16:34.743 { 
00:16:34.743 "name": "BaseBdev2", 00:16:34.743 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:34.743 "is_configured": true, 00:16:34.743 "data_offset": 2048, 00:16:34.743 "data_size": 63488 00:16:34.743 } 00:16:34.743 ] 00:16:34.743 }' 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.743 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.309 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.309 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.309 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.310 "name": "raid_bdev1", 00:16:35.310 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:35.310 "strip_size_kb": 0, 00:16:35.310 "state": "online", 00:16:35.310 "raid_level": "raid1", 00:16:35.310 "superblock": true, 00:16:35.310 "num_base_bdevs": 2, 00:16:35.310 "num_base_bdevs_discovered": 2, 
00:16:35.310 "num_base_bdevs_operational": 2, 00:16:35.310 "base_bdevs_list": [ 00:16:35.310 { 00:16:35.310 "name": "spare", 00:16:35.310 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:35.310 "is_configured": true, 00:16:35.310 "data_offset": 2048, 00:16:35.310 "data_size": 63488 00:16:35.310 }, 00:16:35.310 { 00:16:35.310 "name": "BaseBdev2", 00:16:35.310 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:35.310 "is_configured": true, 00:16:35.310 "data_offset": 2048, 00:16:35.310 "data_size": 63488 00:16:35.310 } 00:16:35.310 ] 00:16:35.310 }' 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.310 08:48:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.310 [2024-11-27 08:48:32.030033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.310 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.568 08:48:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.568 "name": "raid_bdev1", 00:16:35.568 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:35.568 "strip_size_kb": 0, 00:16:35.568 "state": "online", 00:16:35.568 "raid_level": "raid1", 00:16:35.568 "superblock": true, 00:16:35.568 "num_base_bdevs": 2, 00:16:35.568 "num_base_bdevs_discovered": 1, 00:16:35.568 "num_base_bdevs_operational": 1, 00:16:35.568 "base_bdevs_list": [ 00:16:35.568 { 00:16:35.568 "name": null, 00:16:35.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.568 "is_configured": false, 00:16:35.568 "data_offset": 0, 00:16:35.568 "data_size": 63488 00:16:35.568 }, 00:16:35.568 { 00:16:35.568 "name": "BaseBdev2", 00:16:35.568 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:35.568 "is_configured": true, 00:16:35.568 "data_offset": 2048, 00:16:35.568 "data_size": 63488 00:16:35.568 } 00:16:35.568 ] 00:16:35.568 }' 00:16:35.568 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.568 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.826 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.826 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.826 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.826 [2024-11-27 08:48:32.554386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.826 [2024-11-27 08:48:32.554693] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.826 [2024-11-27 08:48:32.554716] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:35.826 [2024-11-27 08:48:32.554783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.826 [2024-11-27 08:48:32.573205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:16:35.826 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.826 08:48:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:35.826 [2024-11-27 08:48:32.576093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.202 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.202 "name": "raid_bdev1", 00:16:37.202 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:37.202 "strip_size_kb": 0, 00:16:37.202 "state": "online", 
00:16:37.202 "raid_level": "raid1", 00:16:37.202 "superblock": true, 00:16:37.202 "num_base_bdevs": 2, 00:16:37.202 "num_base_bdevs_discovered": 2, 00:16:37.202 "num_base_bdevs_operational": 2, 00:16:37.202 "process": { 00:16:37.202 "type": "rebuild", 00:16:37.202 "target": "spare", 00:16:37.202 "progress": { 00:16:37.202 "blocks": 20480, 00:16:37.202 "percent": 32 00:16:37.202 } 00:16:37.202 }, 00:16:37.202 "base_bdevs_list": [ 00:16:37.202 { 00:16:37.202 "name": "spare", 00:16:37.203 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:37.203 "is_configured": true, 00:16:37.203 "data_offset": 2048, 00:16:37.203 "data_size": 63488 00:16:37.203 }, 00:16:37.203 { 00:16:37.203 "name": "BaseBdev2", 00:16:37.203 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:37.203 "is_configured": true, 00:16:37.203 "data_offset": 2048, 00:16:37.203 "data_size": 63488 00:16:37.203 } 00:16:37.203 ] 00:16:37.203 }' 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.203 [2024-11-27 08:48:33.754133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.203 [2024-11-27 08:48:33.787816] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.203 [2024-11-27 
08:48:33.788070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.203 [2024-11-27 08:48:33.788105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.203 [2024-11-27 08:48:33.788118] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.203 "name": "raid_bdev1", 00:16:37.203 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:37.203 "strip_size_kb": 0, 00:16:37.203 "state": "online", 00:16:37.203 "raid_level": "raid1", 00:16:37.203 "superblock": true, 00:16:37.203 "num_base_bdevs": 2, 00:16:37.203 "num_base_bdevs_discovered": 1, 00:16:37.203 "num_base_bdevs_operational": 1, 00:16:37.203 "base_bdevs_list": [ 00:16:37.203 { 00:16:37.203 "name": null, 00:16:37.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.203 "is_configured": false, 00:16:37.203 "data_offset": 0, 00:16:37.203 "data_size": 63488 00:16:37.203 }, 00:16:37.203 { 00:16:37.203 "name": "BaseBdev2", 00:16:37.203 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:37.203 "is_configured": true, 00:16:37.203 "data_offset": 2048, 00:16:37.203 "data_size": 63488 00:16:37.203 } 00:16:37.203 ] 00:16:37.203 }' 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.203 08:48:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.769 08:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:37.769 08:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.769 08:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.769 [2024-11-27 08:48:34.325529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:37.769 [2024-11-27 08:48:34.325762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.769 [2024-11-27 08:48:34.325850] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:16:37.769 [2024-11-27 08:48:34.326103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.769 [2024-11-27 08:48:34.326845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.769 [2024-11-27 08:48:34.326882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:37.769 [2024-11-27 08:48:34.327031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:37.769 [2024-11-27 08:48:34.327054] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.769 [2024-11-27 08:48:34.327071] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:37.769 [2024-11-27 08:48:34.327101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.769 [2024-11-27 08:48:34.344803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:16:37.769 spare 00:16:37.769 08:48:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.769 08:48:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:37.770 [2024-11-27 08:48:34.347481] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.728 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.728 "name": "raid_bdev1", 00:16:38.728 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:38.728 "strip_size_kb": 0, 00:16:38.728 "state": "online", 00:16:38.728 "raid_level": "raid1", 00:16:38.728 "superblock": true, 00:16:38.728 "num_base_bdevs": 2, 00:16:38.728 "num_base_bdevs_discovered": 2, 00:16:38.728 "num_base_bdevs_operational": 2, 00:16:38.728 "process": { 00:16:38.728 "type": "rebuild", 00:16:38.728 "target": "spare", 00:16:38.729 "progress": { 00:16:38.729 "blocks": 20480, 00:16:38.729 "percent": 32 00:16:38.729 } 00:16:38.729 }, 00:16:38.729 "base_bdevs_list": [ 00:16:38.729 { 00:16:38.729 "name": "spare", 00:16:38.729 "uuid": "75265187-67d6-52f2-b695-612c08c677eb", 00:16:38.729 "is_configured": true, 00:16:38.729 "data_offset": 2048, 00:16:38.729 "data_size": 63488 00:16:38.729 }, 00:16:38.729 { 00:16:38.729 "name": "BaseBdev2", 00:16:38.729 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:38.729 "is_configured": true, 00:16:38.729 "data_offset": 2048, 00:16:38.729 "data_size": 63488 00:16:38.729 } 00:16:38.729 ] 00:16:38.729 }' 00:16:38.729 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.729 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:38.729 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.987 [2024-11-27 08:48:35.505585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.987 [2024-11-27 08:48:35.558679] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.987 [2024-11-27 08:48:35.558955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.987 [2024-11-27 08:48:35.559097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.987 [2024-11-27 08:48:35.559155] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.987 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.988 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.988 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.988 "name": "raid_bdev1", 00:16:38.988 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:38.988 "strip_size_kb": 0, 00:16:38.988 "state": "online", 00:16:38.988 "raid_level": "raid1", 00:16:38.988 "superblock": true, 00:16:38.988 "num_base_bdevs": 2, 00:16:38.988 "num_base_bdevs_discovered": 1, 00:16:38.988 "num_base_bdevs_operational": 1, 00:16:38.988 "base_bdevs_list": [ 00:16:38.988 { 00:16:38.988 "name": null, 00:16:38.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.988 "is_configured": false, 00:16:38.988 "data_offset": 0, 00:16:38.988 "data_size": 63488 00:16:38.988 }, 00:16:38.988 { 00:16:38.988 "name": "BaseBdev2", 00:16:38.988 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:38.988 "is_configured": true, 00:16:38.988 "data_offset": 2048, 00:16:38.988 "data_size": 63488 00:16:38.988 } 00:16:38.988 ] 00:16:38.988 }' 
00:16:38.988 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.988 08:48:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.555 "name": "raid_bdev1", 00:16:39.555 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:39.555 "strip_size_kb": 0, 00:16:39.555 "state": "online", 00:16:39.555 "raid_level": "raid1", 00:16:39.555 "superblock": true, 00:16:39.555 "num_base_bdevs": 2, 00:16:39.555 "num_base_bdevs_discovered": 1, 00:16:39.555 "num_base_bdevs_operational": 1, 00:16:39.555 "base_bdevs_list": [ 00:16:39.555 { 00:16:39.555 "name": null, 00:16:39.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.555 "is_configured": false, 00:16:39.555 "data_offset": 0, 
00:16:39.555 "data_size": 63488 00:16:39.555 }, 00:16:39.555 { 00:16:39.555 "name": "BaseBdev2", 00:16:39.555 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:39.555 "is_configured": true, 00:16:39.555 "data_offset": 2048, 00:16:39.555 "data_size": 63488 00:16:39.555 } 00:16:39.555 ] 00:16:39.555 }' 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.555 [2024-11-27 08:48:36.280442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:39.555 [2024-11-27 08:48:36.280533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.555 [2024-11-27 08:48:36.280565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:39.555 [2024-11-27 08:48:36.280587] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.555 [2024-11-27 08:48:36.281254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.555 [2024-11-27 08:48:36.281302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:39.555 [2024-11-27 08:48:36.281421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:39.555 [2024-11-27 08:48:36.281453] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.555 [2024-11-27 08:48:36.281466] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:39.555 [2024-11-27 08:48:36.281484] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:39.555 BaseBdev1 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.555 08:48:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.931 "name": "raid_bdev1", 00:16:40.931 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:40.931 "strip_size_kb": 0, 00:16:40.931 "state": "online", 00:16:40.931 "raid_level": "raid1", 00:16:40.931 "superblock": true, 00:16:40.931 "num_base_bdevs": 2, 00:16:40.931 "num_base_bdevs_discovered": 1, 00:16:40.931 "num_base_bdevs_operational": 1, 00:16:40.931 "base_bdevs_list": [ 00:16:40.931 { 00:16:40.931 "name": null, 00:16:40.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.931 "is_configured": false, 00:16:40.931 "data_offset": 0, 00:16:40.931 "data_size": 63488 00:16:40.931 }, 00:16:40.931 { 00:16:40.931 "name": "BaseBdev2", 00:16:40.931 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:40.931 "is_configured": true, 00:16:40.931 "data_offset": 2048, 00:16:40.931 "data_size": 63488 00:16:40.931 } 00:16:40.931 ] 00:16:40.931 }' 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.931 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.190 "name": "raid_bdev1", 00:16:41.190 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:41.190 "strip_size_kb": 0, 00:16:41.190 "state": "online", 00:16:41.190 "raid_level": "raid1", 00:16:41.190 "superblock": true, 00:16:41.190 "num_base_bdevs": 2, 00:16:41.190 "num_base_bdevs_discovered": 1, 00:16:41.190 "num_base_bdevs_operational": 1, 00:16:41.190 "base_bdevs_list": [ 00:16:41.190 { 00:16:41.190 "name": null, 00:16:41.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.190 "is_configured": false, 00:16:41.190 "data_offset": 0, 00:16:41.190 "data_size": 63488 00:16:41.190 }, 00:16:41.190 { 00:16:41.190 "name": "BaseBdev2", 00:16:41.190 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:41.190 "is_configured": true, 
00:16:41.190 "data_offset": 2048, 00:16:41.190 "data_size": 63488 00:16:41.190 } 00:16:41.190 ] 00:16:41.190 }' 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.190 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.459 [2024-11-27 08:48:37.981213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.459 [2024-11-27 08:48:37.981504] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.459 [2024-11-27 08:48:37.981525] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:41.459 request: 00:16:41.459 { 00:16:41.459 "base_bdev": "BaseBdev1", 00:16:41.459 "raid_bdev": "raid_bdev1", 00:16:41.459 "method": "bdev_raid_add_base_bdev", 00:16:41.459 "req_id": 1 00:16:41.459 } 00:16:41.459 Got JSON-RPC error response 00:16:41.459 response: 00:16:41.459 { 00:16:41.459 "code": -22, 00:16:41.459 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:41.459 } 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:41.459 08:48:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.406 08:48:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.406 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.406 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.406 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.406 "name": "raid_bdev1", 00:16:42.406 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:42.406 "strip_size_kb": 0, 00:16:42.406 "state": "online", 00:16:42.406 "raid_level": "raid1", 00:16:42.406 "superblock": true, 00:16:42.406 "num_base_bdevs": 2, 00:16:42.406 "num_base_bdevs_discovered": 1, 00:16:42.406 "num_base_bdevs_operational": 1, 00:16:42.406 "base_bdevs_list": [ 00:16:42.406 { 00:16:42.406 "name": null, 00:16:42.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.406 "is_configured": false, 00:16:42.406 "data_offset": 0, 00:16:42.406 "data_size": 63488 00:16:42.406 }, 00:16:42.406 { 00:16:42.406 "name": "BaseBdev2", 00:16:42.406 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:42.406 "is_configured": true, 00:16:42.406 "data_offset": 2048, 00:16:42.406 "data_size": 63488 00:16:42.406 } 00:16:42.406 ] 00:16:42.406 }' 
00:16:42.406 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.406 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.972 "name": "raid_bdev1", 00:16:42.972 "uuid": "f2f2fcca-5267-4659-b870-ac90151c11a2", 00:16:42.972 "strip_size_kb": 0, 00:16:42.972 "state": "online", 00:16:42.972 "raid_level": "raid1", 00:16:42.972 "superblock": true, 00:16:42.972 "num_base_bdevs": 2, 00:16:42.972 "num_base_bdevs_discovered": 1, 00:16:42.972 "num_base_bdevs_operational": 1, 00:16:42.972 "base_bdevs_list": [ 00:16:42.972 { 00:16:42.972 "name": null, 00:16:42.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.972 "is_configured": false, 00:16:42.972 "data_offset": 0, 
00:16:42.972 "data_size": 63488 00:16:42.972 }, 00:16:42.972 { 00:16:42.972 "name": "BaseBdev2", 00:16:42.972 "uuid": "00e8cc68-2e0e-5e9b-9122-e950f5fad12c", 00:16:42.972 "is_configured": true, 00:16:42.972 "data_offset": 2048, 00:16:42.972 "data_size": 63488 00:16:42.972 } 00:16:42.972 ] 00:16:42.972 }' 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77230 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' -z 77230 ']' 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # kill -0 77230 00:16:42.972 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # uname 00:16:42.973 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:16:42.973 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 77230 00:16:42.973 killing process with pid 77230 00:16:42.973 Received shutdown signal, test time was about 18.268930 seconds 00:16:42.973 00:16:42.973 Latency(us) 00:16:42.973 [2024-11-27T08:48:39.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.973 [2024-11-27T08:48:39.733Z] =================================================================================================================== 00:16:42.973 [2024-11-27T08:48:39.733Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.973 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@957 -- # process_name=reactor_0 00:16:42.973 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:16:42.973 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # echo 'killing process with pid 77230' 00:16:42.973 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # kill 77230 00:16:42.973 [2024-11-27 08:48:39.716440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.973 08:48:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@975 -- # wait 77230 00:16:42.973 [2024-11-27 08:48:39.716644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.973 [2024-11-27 08:48:39.716731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.973 [2024-11-27 08:48:39.716747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:43.230 [2024-11-27 08:48:39.920249] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:44.604 ************************************ 00:16:44.604 END TEST raid_rebuild_test_sb_io 00:16:44.604 ************************************ 00:16:44.604 00:16:44.604 real 0m21.622s 00:16:44.604 user 0m29.302s 00:16:44.604 sys 0m2.080s 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # xtrace_disable 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.604 08:48:41 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:44.604 08:48:41 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:44.604 08:48:41 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 
00:16:44.604 08:48:41 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:16:44.604 08:48:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.604 ************************************ 00:16:44.604 START TEST raid_rebuild_test 00:16:44.604 ************************************ 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 4 false false true 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77931 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77931 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # '[' -z 77931 ']' 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.604 08:48:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:16:44.604 08:48:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.604 [2024-11-27 08:48:41.236706] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:16:44.604 [2024-11-27 08:48:41.237224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77931 ] 00:16:44.604 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:44.604 Zero copy mechanism will not be used. 
00:16:44.861 [2024-11-27 08:48:41.424167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.861 [2024-11-27 08:48:41.566130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.119 [2024-11-27 08:48:41.790440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.119 [2024-11-27 08:48:41.790520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # return 0 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.684 BaseBdev1_malloc 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.684 [2024-11-27 08:48:42.257766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.684 [2024-11-27 08:48:42.257865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.684 [2024-11-27 08:48:42.257899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.684 [2024-11-27 08:48:42.257919] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.684 [2024-11-27 08:48:42.260947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.684 [2024-11-27 08:48:42.261012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.684 BaseBdev1 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.684 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 BaseBdev2_malloc 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 [2024-11-27 08:48:42.316233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:45.685 [2024-11-27 08:48:42.316496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.685 [2024-11-27 08:48:42.316570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:45.685 [2024-11-27 08:48:42.316746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.685 [2024-11-27 08:48:42.319764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.685 [2024-11-27 08:48:42.319998] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:45.685 BaseBdev2 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 BaseBdev3_malloc 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 [2024-11-27 08:48:42.389280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:45.685 [2024-11-27 08:48:42.389411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.685 [2024-11-27 08:48:42.389445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:45.685 [2024-11-27 08:48:42.389470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.685 [2024-11-27 08:48:42.392442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.685 [2024-11-27 08:48:42.392507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.685 BaseBdev3 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 
08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 BaseBdev4_malloc 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.943 [2024-11-27 08:48:42.443604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:45.943 [2024-11-27 08:48:42.443671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.943 [2024-11-27 08:48:42.443698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:45.943 [2024-11-27 08:48:42.443731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.943 [2024-11-27 08:48:42.446796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.943 [2024-11-27 08:48:42.447041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:45.943 BaseBdev4 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.943 spare_malloc 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.943 spare_delay 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.943 [2024-11-27 08:48:42.510317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.943 [2024-11-27 08:48:42.510411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.943 [2024-11-27 08:48:42.510441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:45.943 [2024-11-27 08:48:42.510461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.943 [2024-11-27 08:48:42.513575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.943 [2024-11-27 08:48:42.513627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.943 spare 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.943 [2024-11-27 08:48:42.522482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.943 [2024-11-27 08:48:42.525291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.943 [2024-11-27 08:48:42.525562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.943 [2024-11-27 08:48:42.525772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:45.943 [2024-11-27 08:48:42.526035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:45.943 [2024-11-27 08:48:42.526157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:45.943 [2024-11-27 08:48:42.526564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:45.943 [2024-11-27 08:48:42.526942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:45.943 [2024-11-27 08:48:42.527063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:45.943 [2024-11-27 08:48:42.527456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.943 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.944 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.944 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.944 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.944 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.944 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.944 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.944 "name": "raid_bdev1", 00:16:45.944 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:16:45.944 "strip_size_kb": 0, 00:16:45.944 "state": "online", 00:16:45.944 "raid_level": "raid1", 00:16:45.944 "superblock": false, 00:16:45.944 "num_base_bdevs": 4, 00:16:45.944 "num_base_bdevs_discovered": 4, 00:16:45.944 "num_base_bdevs_operational": 4, 00:16:45.944 "base_bdevs_list": [ 00:16:45.944 { 00:16:45.944 "name": "BaseBdev1", 00:16:45.944 "uuid": "48501561-d928-520a-af43-19c2d591cb1c", 00:16:45.944 "is_configured": true, 00:16:45.944 "data_offset": 0, 00:16:45.944 "data_size": 65536 00:16:45.944 }, 00:16:45.944 { 00:16:45.944 
"name": "BaseBdev2", 00:16:45.944 "uuid": "3eea0ce0-c9cd-5507-9f2e-2d4996f31fbb", 00:16:45.944 "is_configured": true, 00:16:45.944 "data_offset": 0, 00:16:45.944 "data_size": 65536 00:16:45.944 }, 00:16:45.944 { 00:16:45.944 "name": "BaseBdev3", 00:16:45.944 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:16:45.944 "is_configured": true, 00:16:45.944 "data_offset": 0, 00:16:45.944 "data_size": 65536 00:16:45.944 }, 00:16:45.944 { 00:16:45.944 "name": "BaseBdev4", 00:16:45.944 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:16:45.944 "is_configured": true, 00:16:45.944 "data_offset": 0, 00:16:45.944 "data_size": 65536 00:16:45.944 } 00:16:45.944 ] 00:16:45.944 }' 00:16:45.944 08:48:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.944 08:48:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.509 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.510 [2024-11-27 08:48:43.047998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.510 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:46.767 [2024-11-27 08:48:43.435733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:46.767 /dev/nbd0 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.767 
08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # break 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.767 1+0 records in 00:16:46.767 1+0 records out 00:16:46.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296162 s, 13.8 MB/s 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:46.767 08:48:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:56.788 65536+0 records in 00:16:56.788 65536+0 records out 00:16:56.788 33554432 bytes (34 MB, 32 MiB) copied, 8.21494 s, 4.1 MB/s 00:16:56.788 08:48:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:56.788 08:48:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.788 08:48:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:56.788 08:48:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.788 08:48:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:56.789 08:48:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.789 08:48:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:56.789 [2024-11-27 08:48:51.994411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:56.789 08:48:52 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.789 [2024-11-27 08:48:52.026502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.789 08:48:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.789 "name": "raid_bdev1", 00:16:56.789 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:16:56.789 "strip_size_kb": 0, 00:16:56.789 "state": "online", 00:16:56.789 "raid_level": "raid1", 00:16:56.789 "superblock": false, 00:16:56.789 "num_base_bdevs": 4, 00:16:56.789 "num_base_bdevs_discovered": 3, 00:16:56.789 "num_base_bdevs_operational": 3, 00:16:56.789 "base_bdevs_list": [ 00:16:56.789 { 00:16:56.789 "name": null, 00:16:56.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.789 "is_configured": false, 00:16:56.789 "data_offset": 0, 00:16:56.789 "data_size": 65536 00:16:56.789 }, 00:16:56.789 { 00:16:56.789 "name": "BaseBdev2", 00:16:56.789 "uuid": "3eea0ce0-c9cd-5507-9f2e-2d4996f31fbb", 00:16:56.789 "is_configured": true, 00:16:56.789 "data_offset": 0, 00:16:56.789 "data_size": 65536 00:16:56.789 }, 00:16:56.789 { 00:16:56.789 "name": "BaseBdev3", 00:16:56.789 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:16:56.789 "is_configured": true, 00:16:56.789 "data_offset": 0, 00:16:56.789 "data_size": 65536 00:16:56.789 }, 00:16:56.789 { 00:16:56.789 "name": "BaseBdev4", 00:16:56.789 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:16:56.789 "is_configured": true, 00:16:56.789 "data_offset": 0, 00:16:56.789 "data_size": 65536 00:16:56.789 } 00:16:56.789 ] 00:16:56.789 }' 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.789 [2024-11-27 08:48:52.518681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.789 [2024-11-27 08:48:52.534120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.789 08:48:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:56.789 [2024-11-27 08:48:52.536950] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.789 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.048 "name": "raid_bdev1", 00:16:57.048 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 
00:16:57.048 "strip_size_kb": 0, 00:16:57.048 "state": "online", 00:16:57.048 "raid_level": "raid1", 00:16:57.048 "superblock": false, 00:16:57.048 "num_base_bdevs": 4, 00:16:57.048 "num_base_bdevs_discovered": 4, 00:16:57.048 "num_base_bdevs_operational": 4, 00:16:57.048 "process": { 00:16:57.048 "type": "rebuild", 00:16:57.048 "target": "spare", 00:16:57.048 "progress": { 00:16:57.048 "blocks": 20480, 00:16:57.048 "percent": 31 00:16:57.048 } 00:16:57.048 }, 00:16:57.048 "base_bdevs_list": [ 00:16:57.048 { 00:16:57.048 "name": "spare", 00:16:57.048 "uuid": "aae7cd26-b8ed-5476-9315-0e3e2c1dff44", 00:16:57.048 "is_configured": true, 00:16:57.048 "data_offset": 0, 00:16:57.048 "data_size": 65536 00:16:57.048 }, 00:16:57.048 { 00:16:57.048 "name": "BaseBdev2", 00:16:57.048 "uuid": "3eea0ce0-c9cd-5507-9f2e-2d4996f31fbb", 00:16:57.048 "is_configured": true, 00:16:57.048 "data_offset": 0, 00:16:57.048 "data_size": 65536 00:16:57.048 }, 00:16:57.048 { 00:16:57.048 "name": "BaseBdev3", 00:16:57.048 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:16:57.048 "is_configured": true, 00:16:57.048 "data_offset": 0, 00:16:57.048 "data_size": 65536 00:16:57.048 }, 00:16:57.048 { 00:16:57.048 "name": "BaseBdev4", 00:16:57.048 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:16:57.048 "is_configured": true, 00:16:57.048 "data_offset": 0, 00:16:57.048 "data_size": 65536 00:16:57.048 } 00:16:57.048 ] 00:16:57.048 }' 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.048 [2024-11-27 08:48:53.690416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.048 [2024-11-27 08:48:53.749008] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.048 [2024-11-27 08:48:53.749109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.048 [2024-11-27 08:48:53.749138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.048 [2024-11-27 08:48:53.749155] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.048 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.306 08:48:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.306 "name": "raid_bdev1", 00:16:57.306 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:16:57.306 "strip_size_kb": 0, 00:16:57.306 "state": "online", 00:16:57.306 "raid_level": "raid1", 00:16:57.306 "superblock": false, 00:16:57.306 "num_base_bdevs": 4, 00:16:57.306 "num_base_bdevs_discovered": 3, 00:16:57.306 "num_base_bdevs_operational": 3, 00:16:57.306 "base_bdevs_list": [ 00:16:57.306 { 00:16:57.306 "name": null, 00:16:57.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.306 "is_configured": false, 00:16:57.306 "data_offset": 0, 00:16:57.306 "data_size": 65536 00:16:57.306 }, 00:16:57.306 { 00:16:57.306 "name": "BaseBdev2", 00:16:57.306 "uuid": "3eea0ce0-c9cd-5507-9f2e-2d4996f31fbb", 00:16:57.306 "is_configured": true, 00:16:57.306 "data_offset": 0, 00:16:57.306 "data_size": 65536 00:16:57.306 }, 00:16:57.306 { 00:16:57.306 "name": "BaseBdev3", 00:16:57.306 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:16:57.306 "is_configured": true, 00:16:57.306 "data_offset": 0, 00:16:57.306 "data_size": 65536 00:16:57.306 }, 00:16:57.306 { 00:16:57.306 "name": "BaseBdev4", 00:16:57.306 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:16:57.306 "is_configured": true, 00:16:57.306 "data_offset": 0, 00:16:57.306 "data_size": 65536 00:16:57.306 } 00:16:57.306 ] 00:16:57.306 }' 00:16:57.306 08:48:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.306 08:48:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.566 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.826 "name": "raid_bdev1", 00:16:57.826 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:16:57.826 "strip_size_kb": 0, 00:16:57.826 "state": "online", 00:16:57.826 "raid_level": "raid1", 00:16:57.826 "superblock": false, 00:16:57.826 "num_base_bdevs": 4, 00:16:57.826 "num_base_bdevs_discovered": 3, 00:16:57.826 "num_base_bdevs_operational": 3, 00:16:57.826 "base_bdevs_list": [ 00:16:57.826 { 00:16:57.826 "name": null, 00:16:57.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.826 "is_configured": false, 00:16:57.826 "data_offset": 0, 00:16:57.826 "data_size": 65536 00:16:57.826 }, 00:16:57.826 { 00:16:57.826 "name": "BaseBdev2", 00:16:57.826 "uuid": 
"3eea0ce0-c9cd-5507-9f2e-2d4996f31fbb", 00:16:57.826 "is_configured": true, 00:16:57.826 "data_offset": 0, 00:16:57.826 "data_size": 65536 00:16:57.826 }, 00:16:57.826 { 00:16:57.826 "name": "BaseBdev3", 00:16:57.826 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:16:57.826 "is_configured": true, 00:16:57.826 "data_offset": 0, 00:16:57.826 "data_size": 65536 00:16:57.826 }, 00:16:57.826 { 00:16:57.826 "name": "BaseBdev4", 00:16:57.826 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:16:57.826 "is_configured": true, 00:16:57.826 "data_offset": 0, 00:16:57.826 "data_size": 65536 00:16:57.826 } 00:16:57.826 ] 00:16:57.826 }' 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.826 [2024-11-27 08:48:54.438707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.826 [2024-11-27 08:48:54.453066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.826 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:57.826 [2024-11-27 08:48:54.456007] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.763 08:48:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.763 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.763 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.763 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.763 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.764 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.764 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.764 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.764 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.764 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.764 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.764 "name": "raid_bdev1", 00:16:58.764 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:16:58.764 "strip_size_kb": 0, 00:16:58.764 "state": "online", 00:16:58.764 "raid_level": "raid1", 00:16:58.764 "superblock": false, 00:16:58.764 "num_base_bdevs": 4, 00:16:58.764 "num_base_bdevs_discovered": 4, 00:16:58.764 "num_base_bdevs_operational": 4, 00:16:58.764 "process": { 00:16:58.764 "type": "rebuild", 00:16:58.764 "target": "spare", 00:16:58.764 "progress": { 00:16:58.764 "blocks": 20480, 00:16:58.764 "percent": 31 00:16:58.764 } 00:16:58.764 }, 00:16:58.764 "base_bdevs_list": [ 00:16:58.764 { 00:16:58.764 "name": "spare", 00:16:58.764 "uuid": "aae7cd26-b8ed-5476-9315-0e3e2c1dff44", 00:16:58.764 "is_configured": true, 00:16:58.764 "data_offset": 0, 00:16:58.764 "data_size": 65536 00:16:58.764 }, 00:16:58.764 { 
00:16:58.764 "name": "BaseBdev2", 00:16:58.764 "uuid": "3eea0ce0-c9cd-5507-9f2e-2d4996f31fbb", 00:16:58.764 "is_configured": true, 00:16:58.764 "data_offset": 0, 00:16:58.764 "data_size": 65536 00:16:58.764 }, 00:16:58.764 { 00:16:58.764 "name": "BaseBdev3", 00:16:58.764 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:16:58.764 "is_configured": true, 00:16:58.764 "data_offset": 0, 00:16:58.764 "data_size": 65536 00:16:58.764 }, 00:16:58.764 { 00:16:58.764 "name": "BaseBdev4", 00:16:58.764 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:16:58.764 "is_configured": true, 00:16:58.764 "data_offset": 0, 00:16:58.764 "data_size": 65536 00:16:58.764 } 00:16:58.764 ] 00:16:58.764 }' 00:16:58.764 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.023 [2024-11-27 08:48:55.626036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.023 
[2024-11-27 08:48:55.667779] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.023 "name": "raid_bdev1", 00:16:59.023 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:16:59.023 "strip_size_kb": 0, 00:16:59.023 "state": "online", 00:16:59.023 "raid_level": "raid1", 00:16:59.023 "superblock": false, 00:16:59.023 "num_base_bdevs": 4, 00:16:59.023 "num_base_bdevs_discovered": 3, 00:16:59.023 "num_base_bdevs_operational": 3, 00:16:59.023 "process": { 
00:16:59.023 "type": "rebuild", 00:16:59.023 "target": "spare", 00:16:59.023 "progress": { 00:16:59.023 "blocks": 24576, 00:16:59.023 "percent": 37 00:16:59.023 } 00:16:59.023 }, 00:16:59.023 "base_bdevs_list": [ 00:16:59.023 { 00:16:59.023 "name": "spare", 00:16:59.023 "uuid": "aae7cd26-b8ed-5476-9315-0e3e2c1dff44", 00:16:59.023 "is_configured": true, 00:16:59.023 "data_offset": 0, 00:16:59.023 "data_size": 65536 00:16:59.023 }, 00:16:59.023 { 00:16:59.023 "name": null, 00:16:59.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.023 "is_configured": false, 00:16:59.023 "data_offset": 0, 00:16:59.023 "data_size": 65536 00:16:59.023 }, 00:16:59.023 { 00:16:59.023 "name": "BaseBdev3", 00:16:59.023 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:16:59.023 "is_configured": true, 00:16:59.023 "data_offset": 0, 00:16:59.023 "data_size": 65536 00:16:59.023 }, 00:16:59.023 { 00:16:59.023 "name": "BaseBdev4", 00:16:59.023 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:16:59.023 "is_configured": true, 00:16:59.023 "data_offset": 0, 00:16:59.023 "data_size": 65536 00:16:59.023 } 00:16:59.023 ] 00:16:59.023 }' 00:16:59.023 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.282 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.282 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.282 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=489 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.283 "name": "raid_bdev1", 00:16:59.283 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:16:59.283 "strip_size_kb": 0, 00:16:59.283 "state": "online", 00:16:59.283 "raid_level": "raid1", 00:16:59.283 "superblock": false, 00:16:59.283 "num_base_bdevs": 4, 00:16:59.283 "num_base_bdevs_discovered": 3, 00:16:59.283 "num_base_bdevs_operational": 3, 00:16:59.283 "process": { 00:16:59.283 "type": "rebuild", 00:16:59.283 "target": "spare", 00:16:59.283 "progress": { 00:16:59.283 "blocks": 26624, 00:16:59.283 "percent": 40 00:16:59.283 } 00:16:59.283 }, 00:16:59.283 "base_bdevs_list": [ 00:16:59.283 { 00:16:59.283 "name": "spare", 00:16:59.283 "uuid": "aae7cd26-b8ed-5476-9315-0e3e2c1dff44", 00:16:59.283 "is_configured": true, 00:16:59.283 "data_offset": 0, 00:16:59.283 "data_size": 65536 00:16:59.283 }, 00:16:59.283 { 00:16:59.283 "name": null, 00:16:59.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.283 "is_configured": false, 00:16:59.283 "data_offset": 0, 00:16:59.283 "data_size": 65536 00:16:59.283 }, 
00:16:59.283 { 00:16:59.283 "name": "BaseBdev3", 00:16:59.283 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:16:59.283 "is_configured": true, 00:16:59.283 "data_offset": 0, 00:16:59.283 "data_size": 65536 00:16:59.283 }, 00:16:59.283 { 00:16:59.283 "name": "BaseBdev4", 00:16:59.283 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:16:59.283 "is_configured": true, 00:16:59.283 "data_offset": 0, 00:16:59.283 "data_size": 65536 00:16:59.283 } 00:16:59.283 ] 00:16:59.283 }' 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.283 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.283 08:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.283 08:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.661 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.661 "name": "raid_bdev1", 00:17:00.661 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:17:00.661 "strip_size_kb": 0, 00:17:00.661 "state": "online", 00:17:00.661 "raid_level": "raid1", 00:17:00.661 "superblock": false, 00:17:00.661 "num_base_bdevs": 4, 00:17:00.661 "num_base_bdevs_discovered": 3, 00:17:00.661 "num_base_bdevs_operational": 3, 00:17:00.661 "process": { 00:17:00.661 "type": "rebuild", 00:17:00.661 "target": "spare", 00:17:00.661 "progress": { 00:17:00.661 "blocks": 51200, 00:17:00.661 "percent": 78 00:17:00.661 } 00:17:00.661 }, 00:17:00.662 "base_bdevs_list": [ 00:17:00.662 { 00:17:00.662 "name": "spare", 00:17:00.662 "uuid": "aae7cd26-b8ed-5476-9315-0e3e2c1dff44", 00:17:00.662 "is_configured": true, 00:17:00.662 "data_offset": 0, 00:17:00.662 "data_size": 65536 00:17:00.662 }, 00:17:00.662 { 00:17:00.662 "name": null, 00:17:00.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.662 "is_configured": false, 00:17:00.662 "data_offset": 0, 00:17:00.662 "data_size": 65536 00:17:00.662 }, 00:17:00.662 { 00:17:00.662 "name": "BaseBdev3", 00:17:00.662 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:17:00.662 "is_configured": true, 00:17:00.662 "data_offset": 0, 00:17:00.662 "data_size": 65536 00:17:00.662 }, 00:17:00.662 { 00:17:00.662 "name": "BaseBdev4", 00:17:00.662 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:17:00.662 "is_configured": true, 00:17:00.662 "data_offset": 0, 00:17:00.662 "data_size": 65536 00:17:00.662 } 00:17:00.662 ] 00:17:00.662 }' 00:17:00.662 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.662 08:48:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.662 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.662 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.662 08:48:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.229 [2024-11-27 08:48:57.687140] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:01.229 [2024-11-27 08:48:57.687288] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:01.229 [2024-11-27 08:48:57.687387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.488 "name": "raid_bdev1", 00:17:01.488 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:17:01.488 "strip_size_kb": 0, 00:17:01.488 "state": "online", 00:17:01.488 "raid_level": "raid1", 00:17:01.488 "superblock": false, 00:17:01.488 "num_base_bdevs": 4, 00:17:01.488 "num_base_bdevs_discovered": 3, 00:17:01.488 "num_base_bdevs_operational": 3, 00:17:01.488 "base_bdevs_list": [ 00:17:01.488 { 00:17:01.488 "name": "spare", 00:17:01.488 "uuid": "aae7cd26-b8ed-5476-9315-0e3e2c1dff44", 00:17:01.488 "is_configured": true, 00:17:01.488 "data_offset": 0, 00:17:01.488 "data_size": 65536 00:17:01.488 }, 00:17:01.488 { 00:17:01.488 "name": null, 00:17:01.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.488 "is_configured": false, 00:17:01.488 "data_offset": 0, 00:17:01.488 "data_size": 65536 00:17:01.488 }, 00:17:01.488 { 00:17:01.488 "name": "BaseBdev3", 00:17:01.488 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:17:01.488 "is_configured": true, 00:17:01.488 "data_offset": 0, 00:17:01.488 "data_size": 65536 00:17:01.488 }, 00:17:01.488 { 00:17:01.488 "name": "BaseBdev4", 00:17:01.488 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:17:01.488 "is_configured": true, 00:17:01.488 "data_offset": 0, 00:17:01.488 "data_size": 65536 00:17:01.488 } 00:17:01.488 ] 00:17:01.488 }' 00:17:01.488 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.747 "name": "raid_bdev1", 00:17:01.747 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:17:01.747 "strip_size_kb": 0, 00:17:01.747 "state": "online", 00:17:01.747 "raid_level": "raid1", 00:17:01.747 "superblock": false, 00:17:01.747 "num_base_bdevs": 4, 00:17:01.747 "num_base_bdevs_discovered": 3, 00:17:01.747 "num_base_bdevs_operational": 3, 00:17:01.747 "base_bdevs_list": [ 00:17:01.747 { 00:17:01.747 "name": "spare", 00:17:01.747 "uuid": "aae7cd26-b8ed-5476-9315-0e3e2c1dff44", 00:17:01.747 "is_configured": true, 00:17:01.747 "data_offset": 0, 00:17:01.747 "data_size": 65536 00:17:01.747 }, 00:17:01.747 { 00:17:01.747 "name": null, 00:17:01.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.747 "is_configured": false, 00:17:01.747 "data_offset": 0, 00:17:01.747 "data_size": 65536 00:17:01.747 }, 00:17:01.747 { 00:17:01.747 "name": "BaseBdev3", 00:17:01.747 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 
00:17:01.747 "is_configured": true, 00:17:01.747 "data_offset": 0, 00:17:01.747 "data_size": 65536 00:17:01.747 }, 00:17:01.747 { 00:17:01.747 "name": "BaseBdev4", 00:17:01.747 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:17:01.747 "is_configured": true, 00:17:01.747 "data_offset": 0, 00:17:01.747 "data_size": 65536 00:17:01.747 } 00:17:01.747 ] 00:17:01.747 }' 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.747 
08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.747 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.748 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.748 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.006 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.006 "name": "raid_bdev1", 00:17:02.006 "uuid": "90697d0e-1b1f-45d8-a5a6-33b9f37c4061", 00:17:02.006 "strip_size_kb": 0, 00:17:02.006 "state": "online", 00:17:02.006 "raid_level": "raid1", 00:17:02.006 "superblock": false, 00:17:02.006 "num_base_bdevs": 4, 00:17:02.006 "num_base_bdevs_discovered": 3, 00:17:02.006 "num_base_bdevs_operational": 3, 00:17:02.006 "base_bdevs_list": [ 00:17:02.006 { 00:17:02.006 "name": "spare", 00:17:02.006 "uuid": "aae7cd26-b8ed-5476-9315-0e3e2c1dff44", 00:17:02.006 "is_configured": true, 00:17:02.006 "data_offset": 0, 00:17:02.006 "data_size": 65536 00:17:02.006 }, 00:17:02.006 { 00:17:02.006 "name": null, 00:17:02.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.006 "is_configured": false, 00:17:02.006 "data_offset": 0, 00:17:02.006 "data_size": 65536 00:17:02.006 }, 00:17:02.006 { 00:17:02.006 "name": "BaseBdev3", 00:17:02.006 "uuid": "abb397a4-70e2-560f-9b98-95066537f57a", 00:17:02.006 "is_configured": true, 00:17:02.006 "data_offset": 0, 00:17:02.006 "data_size": 65536 00:17:02.006 }, 00:17:02.006 { 00:17:02.006 "name": "BaseBdev4", 00:17:02.006 "uuid": "77202cb9-f8a9-5032-b490-29d09bf704ac", 00:17:02.006 "is_configured": true, 00:17:02.006 "data_offset": 0, 00:17:02.006 "data_size": 65536 00:17:02.006 } 00:17:02.006 ] 00:17:02.006 }' 00:17:02.006 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.006 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:02.265 08:48:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.265 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.265 08:48:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.265 [2024-11-27 08:48:59.001191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.265 [2024-11-27 08:48:59.001254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.265 [2024-11-27 08:48:59.001399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.265 [2024-11-27 08:48:59.001533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.265 [2024-11-27 08:48:59.001551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.265 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.265 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.265 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:02.265 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.265 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.265 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.523 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:02.523 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:02.523 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:02.523 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:02.523 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.523 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:02.523 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.524 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.524 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.524 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:02.524 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.524 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.524 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:02.790 /dev/nbd0 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # break 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:17:02.790 
08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.790 1+0 records in 00:17:02.790 1+0 records out 00:17:02.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337861 s, 12.1 MB/s 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.790 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:17:02.791 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.791 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:17:02.791 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:17:02.791 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.791 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.791 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:03.049 /dev/nbd1 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:03.049 08:48:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # break 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.049 1+0 records in 00:17:03.049 1+0 records out 00:17:03.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398397 s, 10.3 MB/s 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.049 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:03.308 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:03.308 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.308 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:03.308 08:48:59 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.308 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:03.308 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.308 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.566 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.130 08:49:00 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77931 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' -z 77931 ']' 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # kill -0 77931 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # uname 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 77931 00:17:04.130 killing process with pid 77931 00:17:04.130 Received shutdown signal, test time was about 60.000000 seconds 00:17:04.130 00:17:04.130 Latency(us) 00:17:04.130 [2024-11-27T08:49:00.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.130 [2024-11-27T08:49:00.890Z] =================================================================================================================== 00:17:04.130 [2024-11-27T08:49:00.890Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 77931' 00:17:04.130 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # kill 77931 00:17:04.130 08:49:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@975 -- # wait 77931 00:17:04.130 [2024-11-27 08:49:00.632749] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.388 [2024-11-27 08:49:01.059367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:05.765 00:17:05.765 real 0m21.027s 00:17:05.765 user 0m23.598s 00:17:05.765 sys 0m3.684s 00:17:05.765 ************************************ 00:17:05.765 END TEST raid_rebuild_test 00:17:05.765 ************************************ 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 08:49:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:17:05.765 08:49:02 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:17:05.765 08:49:02 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:17:05.765 08:49:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.765 ************************************ 00:17:05.765 START TEST raid_rebuild_test_sb 00:17:05.765 ************************************ 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 4 true false true 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
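The trace above repeatedly runs the `waitfornbd`/`waitfornbd_exit` pattern: poll `/proc/partitions` with `grep -q -w nbdX` up to 20 times, breaking once the device appears (or, after `nbd_stop_disk`, once it disappears). A minimal standalone sketch of that loop is below; it is a hypothetical simplification (the real helpers live in `test/common/autotest_common.sh` and `test/bdev/nbd_common.sh`), with the partitions file parameterized so the logic can be exercised without a real `/dev/nbd` device.

```shell
# Hypothetical sketch of the waitfornbd/waitfornbd_exit polling seen in the
# trace. "present" mirrors waitfornbd (wait until the kernel lists the
# device); "absent" mirrors waitfornbd_exit (wait until it is gone).
wait_for_nbd_state() {
    local nbd_name=$1 want=$2 partitions=${3:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions"; then
            [ "$want" = present ] && return 0
        else
            [ "$want" = absent ] && return 0
        fi
        sleep 0.1
    done
    return 1
}
```

`wait_for_nbd_state nbd0 present` corresponds to the check before the `dd if=/dev/nbd0 ... iflag=direct` read probe; `wait_for_nbd_state nbd0 absent` corresponds to the loop after `rpc.py nbd_stop_disk`.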
00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.765 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:05.766 08:49:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:05.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78411 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78411 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # '[' -z 78411 ']' 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:17:05.766 08:49:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.766 [2024-11-27 08:49:02.326848] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
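The `raid_pid=78411` / `waitforlisten 78411` sequence above follows a common harness pattern: launch bdevperf in the background, record its pid, and block until its UNIX-domain RPC socket is ready. The sketch below is a hypothetical, simplified version of that pattern (the real `waitforlisten` in `autotest_common.sh` additionally probes the socket with `rpc.py`); the daemon command and socket path are parameters, not SPDK specifics.

```shell
# Hypothetical sketch of the raid_pid/waitforlisten pattern: start a daemon
# in the background, then poll until its UNIX-domain socket exists. Prints
# the daemon's pid on success.
start_and_wait() {
    local sock=$1; shift
    # Redirect the daemon's stdio so a caller using $(start_and_wait ...)
    # is not blocked by the background process holding the pipe open.
    "$@" >/dev/null 2>&1 &
    local pid=$!
    local i
    for ((i = 1; i <= 100; i++)); do
        [ -S "$sock" ] && { echo "$pid"; return 0; }
        sleep 0.1
    done
    kill "$pid" 2>/dev/null
    return 1
}
```

In the trace the daemon is `build/examples/bdevperf ... -z` and the socket is `/var/tmp/spdk.sock`; any server that creates a listening socket works the same way with this helper.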
00:17:05.766 [2024-11-27 08:49:02.327399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78411 ] 00:17:05.766 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:05.766 Zero copy mechanism will not be used. 00:17:05.766 [2024-11-27 08:49:02.516227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.024 [2024-11-27 08:49:02.663989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.283 [2024-11-27 08:49:02.890197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.283 [2024-11-27 08:49:02.890563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.541 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:17:06.541 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # return 0 00:17:06.541 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.541 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:06.541 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.541 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.800 BaseBdev1_malloc 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:06.800 [2024-11-27 08:49:03.312315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:06.800 [2024-11-27 08:49:03.312617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.800 [2024-11-27 08:49:03.312695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.800 [2024-11-27 08:49:03.312872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.800 [2024-11-27 08:49:03.316035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.800 [2024-11-27 08:49:03.316257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.800 BaseBdev1 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.800 BaseBdev2_malloc 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.800 [2024-11-27 08:49:03.370531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:06.800 [2024-11-27 
08:49:03.370740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.800 [2024-11-27 08:49:03.370781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.800 [2024-11-27 08:49:03.370804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.800 [2024-11-27 08:49:03.373866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.800 [2024-11-27 08:49:03.374078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.800 BaseBdev2 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.800 BaseBdev3_malloc 00:17:06.800 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.801 [2024-11-27 08:49:03.436654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:06.801 [2024-11-27 08:49:03.436913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.801 [2024-11-27 08:49:03.436990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:17:06.801 [2024-11-27 08:49:03.437210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.801 [2024-11-27 08:49:03.440284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.801 [2024-11-27 08:49:03.440485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:06.801 BaseBdev3 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.801 BaseBdev4_malloc 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.801 [2024-11-27 08:49:03.494426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:06.801 [2024-11-27 08:49:03.494495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.801 [2024-11-27 08:49:03.494547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:06.801 [2024-11-27 08:49:03.494568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.801 [2024-11-27 08:49:03.497549] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.801 [2024-11-27 08:49:03.497613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:06.801 BaseBdev4 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.801 spare_malloc 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.801 spare_delay 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.801 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.060 [2024-11-27 08:49:03.560932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.060 [2024-11-27 08:49:03.561188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.060 [2024-11-27 08:49:03.561260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a880 00:17:07.060 [2024-11-27 08:49:03.561443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.060 [2024-11-27 08:49:03.564551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.060 [2024-11-27 08:49:03.564771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.060 spare 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.060 [2024-11-27 08:49:03.573081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.060 [2024-11-27 08:49:03.575823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.060 [2024-11-27 08:49:03.575919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.060 [2024-11-27 08:49:03.576013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:07.060 [2024-11-27 08:49:03.576270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.060 [2024-11-27 08:49:03.576297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:07.060 [2024-11-27 08:49:03.576638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:07.060 [2024-11-27 08:49:03.576879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.060 [2024-11-27 08:49:03.576895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.060 [2024-11-27 08:49:03.577152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:17:07.060 "name": "raid_bdev1", 00:17:07.060 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:07.060 "strip_size_kb": 0, 00:17:07.060 "state": "online", 00:17:07.060 "raid_level": "raid1", 00:17:07.060 "superblock": true, 00:17:07.060 "num_base_bdevs": 4, 00:17:07.060 "num_base_bdevs_discovered": 4, 00:17:07.060 "num_base_bdevs_operational": 4, 00:17:07.060 "base_bdevs_list": [ 00:17:07.060 { 00:17:07.060 "name": "BaseBdev1", 00:17:07.060 "uuid": "4f9838a8-cb73-5d4f-a64b-c6c7d258585e", 00:17:07.060 "is_configured": true, 00:17:07.060 "data_offset": 2048, 00:17:07.060 "data_size": 63488 00:17:07.060 }, 00:17:07.060 { 00:17:07.060 "name": "BaseBdev2", 00:17:07.060 "uuid": "050eaa76-8b81-5e03-82d9-d9eb22ec1f46", 00:17:07.060 "is_configured": true, 00:17:07.060 "data_offset": 2048, 00:17:07.060 "data_size": 63488 00:17:07.060 }, 00:17:07.060 { 00:17:07.060 "name": "BaseBdev3", 00:17:07.060 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:07.060 "is_configured": true, 00:17:07.060 "data_offset": 2048, 00:17:07.060 "data_size": 63488 00:17:07.060 }, 00:17:07.060 { 00:17:07.060 "name": "BaseBdev4", 00:17:07.060 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:07.060 "is_configured": true, 00:17:07.060 "data_offset": 2048, 00:17:07.060 "data_size": 63488 00:17:07.060 } 00:17:07.060 ] 00:17:07.060 }' 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.060 08:49:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.628 
[2024-11-27 08:49:04.121818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.628 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:07.889 [2024-11-27 08:49:04.513474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:07.889 /dev/nbd0 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.889 1+0 records in 00:17:07.889 1+0 records out 00:17:07.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414041 s, 9.9 MB/s 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:07.889 08:49:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:17.862 63488+0 records in 00:17:17.862 63488+0 records out 00:17:17.862 32505856 bytes (33 MB, 31 MiB) copied, 8.22905 s, 4.0 MB/s 00:17:17.862 08:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:17.862 08:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.862 08:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:17.862 08:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.862 08:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:17.862 08:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.862 08:49:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:17.862 [2024-11-27 08:49:13.105611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.862 [2024-11-27 08:49:13.121128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.862 "name": "raid_bdev1", 00:17:17.862 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:17.862 "strip_size_kb": 0, 00:17:17.862 "state": "online", 00:17:17.862 "raid_level": "raid1", 00:17:17.862 "superblock": true, 00:17:17.862 "num_base_bdevs": 4, 00:17:17.862 "num_base_bdevs_discovered": 3, 00:17:17.862 "num_base_bdevs_operational": 3, 00:17:17.862 "base_bdevs_list": [ 00:17:17.862 { 00:17:17.862 "name": null, 00:17:17.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.862 "is_configured": false, 00:17:17.862 "data_offset": 0, 00:17:17.862 "data_size": 63488 00:17:17.862 }, 00:17:17.862 { 00:17:17.862 "name": "BaseBdev2", 00:17:17.862 "uuid": 
"050eaa76-8b81-5e03-82d9-d9eb22ec1f46", 00:17:17.862 "is_configured": true, 00:17:17.862 "data_offset": 2048, 00:17:17.862 "data_size": 63488 00:17:17.862 }, 00:17:17.862 { 00:17:17.862 "name": "BaseBdev3", 00:17:17.862 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:17.862 "is_configured": true, 00:17:17.862 "data_offset": 2048, 00:17:17.862 "data_size": 63488 00:17:17.862 }, 00:17:17.862 { 00:17:17.862 "name": "BaseBdev4", 00:17:17.862 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:17.862 "is_configured": true, 00:17:17.862 "data_offset": 2048, 00:17:17.862 "data_size": 63488 00:17:17.862 } 00:17:17.862 ] 00:17:17.862 }' 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.862 [2024-11-27 08:49:13.625262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.862 [2024-11-27 08:49:13.640326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.862 08:49:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:17.862 [2024-11-27 08:49:13.643234] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.121 "name": "raid_bdev1", 00:17:18.121 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:18.121 "strip_size_kb": 0, 00:17:18.121 "state": "online", 00:17:18.121 "raid_level": "raid1", 00:17:18.121 "superblock": true, 00:17:18.121 "num_base_bdevs": 4, 00:17:18.121 "num_base_bdevs_discovered": 4, 00:17:18.121 "num_base_bdevs_operational": 4, 00:17:18.121 "process": { 00:17:18.121 "type": "rebuild", 00:17:18.121 "target": "spare", 00:17:18.121 "progress": { 00:17:18.121 "blocks": 20480, 00:17:18.121 "percent": 32 00:17:18.121 } 00:17:18.121 }, 00:17:18.121 "base_bdevs_list": [ 00:17:18.121 { 00:17:18.121 "name": "spare", 00:17:18.121 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:18.121 "is_configured": true, 00:17:18.121 "data_offset": 2048, 00:17:18.121 "data_size": 63488 00:17:18.121 }, 00:17:18.121 { 00:17:18.121 "name": "BaseBdev2", 00:17:18.121 "uuid": "050eaa76-8b81-5e03-82d9-d9eb22ec1f46", 00:17:18.121 "is_configured": true, 00:17:18.121 "data_offset": 2048, 
00:17:18.121 "data_size": 63488 00:17:18.121 }, 00:17:18.121 { 00:17:18.121 "name": "BaseBdev3", 00:17:18.121 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:18.121 "is_configured": true, 00:17:18.121 "data_offset": 2048, 00:17:18.121 "data_size": 63488 00:17:18.121 }, 00:17:18.121 { 00:17:18.121 "name": "BaseBdev4", 00:17:18.121 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:18.121 "is_configured": true, 00:17:18.121 "data_offset": 2048, 00:17:18.121 "data_size": 63488 00:17:18.121 } 00:17:18.121 ] 00:17:18.121 }' 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.121 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.121 [2024-11-27 08:49:14.817749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.121 [2024-11-27 08:49:14.855587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:18.121 [2024-11-27 08:49:14.855924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.121 [2024-11-27 08:49:14.855958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.121 [2024-11-27 08:49:14.855986] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.380 "name": "raid_bdev1", 00:17:18.380 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:18.380 "strip_size_kb": 0, 00:17:18.380 "state": "online", 00:17:18.380 "raid_level": "raid1", 
00:17:18.380 "superblock": true, 00:17:18.380 "num_base_bdevs": 4, 00:17:18.380 "num_base_bdevs_discovered": 3, 00:17:18.380 "num_base_bdevs_operational": 3, 00:17:18.380 "base_bdevs_list": [ 00:17:18.380 { 00:17:18.380 "name": null, 00:17:18.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.380 "is_configured": false, 00:17:18.380 "data_offset": 0, 00:17:18.380 "data_size": 63488 00:17:18.380 }, 00:17:18.380 { 00:17:18.380 "name": "BaseBdev2", 00:17:18.380 "uuid": "050eaa76-8b81-5e03-82d9-d9eb22ec1f46", 00:17:18.380 "is_configured": true, 00:17:18.380 "data_offset": 2048, 00:17:18.380 "data_size": 63488 00:17:18.380 }, 00:17:18.380 { 00:17:18.380 "name": "BaseBdev3", 00:17:18.380 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:18.380 "is_configured": true, 00:17:18.380 "data_offset": 2048, 00:17:18.380 "data_size": 63488 00:17:18.380 }, 00:17:18.380 { 00:17:18.380 "name": "BaseBdev4", 00:17:18.380 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:18.380 "is_configured": true, 00:17:18.380 "data_offset": 2048, 00:17:18.380 "data_size": 63488 00:17:18.380 } 00:17:18.380 ] 00:17:18.380 }' 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.380 08:49:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.947 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.947 "name": "raid_bdev1", 00:17:18.947 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:18.947 "strip_size_kb": 0, 00:17:18.947 "state": "online", 00:17:18.947 "raid_level": "raid1", 00:17:18.947 "superblock": true, 00:17:18.947 "num_base_bdevs": 4, 00:17:18.947 "num_base_bdevs_discovered": 3, 00:17:18.947 "num_base_bdevs_operational": 3, 00:17:18.947 "base_bdevs_list": [ 00:17:18.947 { 00:17:18.947 "name": null, 00:17:18.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.948 "is_configured": false, 00:17:18.948 "data_offset": 0, 00:17:18.948 "data_size": 63488 00:17:18.948 }, 00:17:18.948 { 00:17:18.948 "name": "BaseBdev2", 00:17:18.948 "uuid": "050eaa76-8b81-5e03-82d9-d9eb22ec1f46", 00:17:18.948 "is_configured": true, 00:17:18.948 "data_offset": 2048, 00:17:18.948 "data_size": 63488 00:17:18.948 }, 00:17:18.948 { 00:17:18.948 "name": "BaseBdev3", 00:17:18.948 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:18.948 "is_configured": true, 00:17:18.948 "data_offset": 2048, 00:17:18.948 "data_size": 63488 00:17:18.948 }, 00:17:18.948 { 00:17:18.948 "name": "BaseBdev4", 00:17:18.948 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:18.948 "is_configured": true, 00:17:18.948 "data_offset": 2048, 00:17:18.948 "data_size": 63488 00:17:18.948 } 00:17:18.948 ] 00:17:18.948 }' 00:17:18.948 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.948 08:49:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.948 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.948 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.948 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:18.948 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.948 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.948 [2024-11-27 08:49:15.550425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.948 [2024-11-27 08:49:15.566297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:17:18.948 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.948 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:18.948 [2024-11-27 08:49:15.569427] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.884 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.884 "name": "raid_bdev1", 00:17:19.884 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:19.884 "strip_size_kb": 0, 00:17:19.884 "state": "online", 00:17:19.884 "raid_level": "raid1", 00:17:19.884 "superblock": true, 00:17:19.884 "num_base_bdevs": 4, 00:17:19.884 "num_base_bdevs_discovered": 4, 00:17:19.884 "num_base_bdevs_operational": 4, 00:17:19.884 "process": { 00:17:19.884 "type": "rebuild", 00:17:19.884 "target": "spare", 00:17:19.884 "progress": { 00:17:19.884 "blocks": 20480, 00:17:19.884 "percent": 32 00:17:19.884 } 00:17:19.884 }, 00:17:19.884 "base_bdevs_list": [ 00:17:19.884 { 00:17:19.884 "name": "spare", 00:17:19.884 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:19.884 "is_configured": true, 00:17:19.884 "data_offset": 2048, 00:17:19.884 "data_size": 63488 00:17:19.884 }, 00:17:19.884 { 00:17:19.884 "name": "BaseBdev2", 00:17:19.884 "uuid": "050eaa76-8b81-5e03-82d9-d9eb22ec1f46", 00:17:19.884 "is_configured": true, 00:17:19.884 "data_offset": 2048, 00:17:19.884 "data_size": 63488 00:17:19.884 }, 00:17:19.884 { 00:17:19.884 "name": "BaseBdev3", 00:17:19.884 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:19.884 "is_configured": true, 00:17:19.884 "data_offset": 2048, 00:17:19.884 "data_size": 63488 00:17:19.884 }, 00:17:19.884 { 00:17:19.884 "name": "BaseBdev4", 00:17:19.884 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:19.884 "is_configured": true, 00:17:19.884 "data_offset": 2048, 00:17:19.884 "data_size": 63488 00:17:19.884 } 00:17:19.884 ] 00:17:19.884 }' 00:17:19.884 08:49:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.142 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.142 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.142 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.142 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:20.142 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:20.143 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.143 [2024-11-27 08:49:16.731357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:20.143 [2024-11-27 08:49:16.881246] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:20.143 08:49:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.143 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.402 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.402 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.402 "name": "raid_bdev1", 00:17:20.402 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:20.402 "strip_size_kb": 0, 00:17:20.402 "state": "online", 00:17:20.402 "raid_level": "raid1", 00:17:20.402 "superblock": true, 00:17:20.402 "num_base_bdevs": 4, 00:17:20.402 "num_base_bdevs_discovered": 3, 00:17:20.402 "num_base_bdevs_operational": 3, 00:17:20.402 "process": { 00:17:20.402 "type": "rebuild", 00:17:20.402 "target": "spare", 00:17:20.402 "progress": { 00:17:20.402 "blocks": 24576, 00:17:20.402 "percent": 38 00:17:20.402 } 00:17:20.402 }, 00:17:20.402 "base_bdevs_list": [ 00:17:20.402 { 00:17:20.402 "name": "spare", 00:17:20.402 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:20.402 "is_configured": true, 00:17:20.402 "data_offset": 2048, 00:17:20.402 "data_size": 63488 
00:17:20.402 }, 00:17:20.402 { 00:17:20.402 "name": null, 00:17:20.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.402 "is_configured": false, 00:17:20.402 "data_offset": 0, 00:17:20.402 "data_size": 63488 00:17:20.402 }, 00:17:20.402 { 00:17:20.402 "name": "BaseBdev3", 00:17:20.402 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:20.402 "is_configured": true, 00:17:20.402 "data_offset": 2048, 00:17:20.402 "data_size": 63488 00:17:20.402 }, 00:17:20.402 { 00:17:20.402 "name": "BaseBdev4", 00:17:20.402 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:20.402 "is_configured": true, 00:17:20.402 "data_offset": 2048, 00:17:20.402 "data_size": 63488 00:17:20.402 } 00:17:20.402 ] 00:17:20.402 }' 00:17:20.402 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.402 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.402 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=511 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.402 08:49:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.402 "name": "raid_bdev1", 00:17:20.402 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:20.402 "strip_size_kb": 0, 00:17:20.402 "state": "online", 00:17:20.402 "raid_level": "raid1", 00:17:20.402 "superblock": true, 00:17:20.402 "num_base_bdevs": 4, 00:17:20.402 "num_base_bdevs_discovered": 3, 00:17:20.402 "num_base_bdevs_operational": 3, 00:17:20.402 "process": { 00:17:20.402 "type": "rebuild", 00:17:20.402 "target": "spare", 00:17:20.402 "progress": { 00:17:20.402 "blocks": 26624, 00:17:20.402 "percent": 41 00:17:20.402 } 00:17:20.402 }, 00:17:20.402 "base_bdevs_list": [ 00:17:20.402 { 00:17:20.402 "name": "spare", 00:17:20.402 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:20.402 "is_configured": true, 00:17:20.402 "data_offset": 2048, 00:17:20.402 "data_size": 63488 00:17:20.402 }, 00:17:20.402 { 00:17:20.402 "name": null, 00:17:20.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.402 "is_configured": false, 00:17:20.402 "data_offset": 0, 00:17:20.402 "data_size": 63488 00:17:20.402 }, 00:17:20.402 { 00:17:20.402 "name": "BaseBdev3", 00:17:20.402 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:20.402 "is_configured": true, 00:17:20.402 "data_offset": 2048, 00:17:20.402 "data_size": 63488 00:17:20.402 }, 00:17:20.402 { 00:17:20.402 "name": "BaseBdev4", 00:17:20.402 "uuid": 
"6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:20.402 "is_configured": true, 00:17:20.402 "data_offset": 2048, 00:17:20.402 "data_size": 63488 00:17:20.402 } 00:17:20.402 ] 00:17:20.402 }' 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.402 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.661 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.661 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.596 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.597 08:49:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.597 "name": "raid_bdev1", 00:17:21.597 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:21.597 "strip_size_kb": 0, 00:17:21.597 "state": "online", 00:17:21.597 "raid_level": "raid1", 00:17:21.597 "superblock": true, 00:17:21.597 "num_base_bdevs": 4, 00:17:21.597 "num_base_bdevs_discovered": 3, 00:17:21.597 "num_base_bdevs_operational": 3, 00:17:21.597 "process": { 00:17:21.597 "type": "rebuild", 00:17:21.597 "target": "spare", 00:17:21.597 "progress": { 00:17:21.597 "blocks": 51200, 00:17:21.597 "percent": 80 00:17:21.597 } 00:17:21.597 }, 00:17:21.597 "base_bdevs_list": [ 00:17:21.597 { 00:17:21.597 "name": "spare", 00:17:21.597 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:21.597 "is_configured": true, 00:17:21.597 "data_offset": 2048, 00:17:21.597 "data_size": 63488 00:17:21.597 }, 00:17:21.597 { 00:17:21.597 "name": null, 00:17:21.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.597 "is_configured": false, 00:17:21.597 "data_offset": 0, 00:17:21.597 "data_size": 63488 00:17:21.597 }, 00:17:21.597 { 00:17:21.597 "name": "BaseBdev3", 00:17:21.597 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:21.597 "is_configured": true, 00:17:21.597 "data_offset": 2048, 00:17:21.597 "data_size": 63488 00:17:21.597 }, 00:17:21.597 { 00:17:21.597 "name": "BaseBdev4", 00:17:21.597 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:21.597 "is_configured": true, 00:17:21.597 "data_offset": 2048, 00:17:21.597 "data_size": 63488 00:17:21.597 } 00:17:21.597 ] 00:17:21.597 }' 00:17:21.597 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.597 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.597 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.855 08:49:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.855 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.113 [2024-11-27 08:49:18.800496] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:22.113 [2024-11-27 08:49:18.800634] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:22.113 [2024-11-27 08:49:18.800836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.680 "name": "raid_bdev1", 00:17:22.680 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:22.680 
"strip_size_kb": 0, 00:17:22.680 "state": "online", 00:17:22.680 "raid_level": "raid1", 00:17:22.680 "superblock": true, 00:17:22.680 "num_base_bdevs": 4, 00:17:22.680 "num_base_bdevs_discovered": 3, 00:17:22.680 "num_base_bdevs_operational": 3, 00:17:22.680 "base_bdevs_list": [ 00:17:22.680 { 00:17:22.680 "name": "spare", 00:17:22.680 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:22.680 "is_configured": true, 00:17:22.680 "data_offset": 2048, 00:17:22.680 "data_size": 63488 00:17:22.680 }, 00:17:22.680 { 00:17:22.680 "name": null, 00:17:22.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.680 "is_configured": false, 00:17:22.680 "data_offset": 0, 00:17:22.680 "data_size": 63488 00:17:22.680 }, 00:17:22.680 { 00:17:22.680 "name": "BaseBdev3", 00:17:22.680 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:22.680 "is_configured": true, 00:17:22.680 "data_offset": 2048, 00:17:22.680 "data_size": 63488 00:17:22.680 }, 00:17:22.680 { 00:17:22.680 "name": "BaseBdev4", 00:17:22.680 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:22.680 "is_configured": true, 00:17:22.680 "data_offset": 2048, 00:17:22.680 "data_size": 63488 00:17:22.680 } 00:17:22.680 ] 00:17:22.680 }' 00:17:22.680 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.939 "name": "raid_bdev1", 00:17:22.939 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:22.939 "strip_size_kb": 0, 00:17:22.939 "state": "online", 00:17:22.939 "raid_level": "raid1", 00:17:22.939 "superblock": true, 00:17:22.939 "num_base_bdevs": 4, 00:17:22.939 "num_base_bdevs_discovered": 3, 00:17:22.939 "num_base_bdevs_operational": 3, 00:17:22.939 "base_bdevs_list": [ 00:17:22.939 { 00:17:22.939 "name": "spare", 00:17:22.939 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:22.939 "is_configured": true, 00:17:22.939 "data_offset": 2048, 00:17:22.939 "data_size": 63488 00:17:22.939 }, 00:17:22.939 { 00:17:22.939 "name": null, 00:17:22.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.939 "is_configured": false, 00:17:22.939 "data_offset": 0, 00:17:22.939 "data_size": 63488 00:17:22.939 }, 00:17:22.939 { 00:17:22.939 "name": "BaseBdev3", 00:17:22.939 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:22.939 "is_configured": true, 00:17:22.939 "data_offset": 2048, 00:17:22.939 "data_size": 
63488 00:17:22.939 }, 00:17:22.939 { 00:17:22.939 "name": "BaseBdev4", 00:17:22.939 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:22.939 "is_configured": true, 00:17:22.939 "data_offset": 2048, 00:17:22.939 "data_size": 63488 00:17:22.939 } 00:17:22.939 ] 00:17:22.939 }' 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.939 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.200 "name": "raid_bdev1", 00:17:23.200 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:23.200 "strip_size_kb": 0, 00:17:23.200 "state": "online", 00:17:23.200 "raid_level": "raid1", 00:17:23.200 "superblock": true, 00:17:23.200 "num_base_bdevs": 4, 00:17:23.200 "num_base_bdevs_discovered": 3, 00:17:23.200 "num_base_bdevs_operational": 3, 00:17:23.200 "base_bdevs_list": [ 00:17:23.200 { 00:17:23.200 "name": "spare", 00:17:23.200 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:23.200 "is_configured": true, 00:17:23.200 "data_offset": 2048, 00:17:23.200 "data_size": 63488 00:17:23.200 }, 00:17:23.200 { 00:17:23.200 "name": null, 00:17:23.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.200 "is_configured": false, 00:17:23.200 "data_offset": 0, 00:17:23.200 "data_size": 63488 00:17:23.200 }, 00:17:23.200 { 00:17:23.200 "name": "BaseBdev3", 00:17:23.200 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:23.200 "is_configured": true, 00:17:23.200 "data_offset": 2048, 00:17:23.200 "data_size": 63488 00:17:23.200 }, 00:17:23.200 { 00:17:23.200 "name": "BaseBdev4", 00:17:23.200 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:23.200 "is_configured": true, 00:17:23.200 "data_offset": 2048, 00:17:23.200 "data_size": 63488 00:17:23.200 } 00:17:23.200 ] 00:17:23.200 }' 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.200 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:23.460 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.460 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.460 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.718 [2024-11-27 08:49:20.222712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.718 [2024-11-27 08:49:20.222935] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.718 [2024-11-27 08:49:20.223245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.718 [2024-11-27 08:49:20.223459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.718 [2024-11-27 08:49:20.223480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:23.718 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- 
# nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.719 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:23.977 /dev/nbd0 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:17:23.977 08:49:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.977 1+0 records in 00:17:23.977 1+0 records out 00:17:23.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316292 s, 13.0 MB/s 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.977 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:24.235 /dev/nbd1 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- 
# (( i = 1 )) 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.235 1+0 records in 00:17:24.235 1+0 records out 00:17:24.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413333 s, 9.9 MB/s 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:17:24.235 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.494 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:17:24.494 08:49:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:17:24.494 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.494 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:24.494 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:24.494 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:24.494 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:17:24.494 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:24.494 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.494 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:24.494 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.494 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.753 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.011 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.334 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.334 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:25.334 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.334 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.334 [2024-11-27 08:49:21.777787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:25.334 [2024-11-27 08:49:21.777856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.334 [2024-11-27 08:49:21.777893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:25.334 [2024-11-27 08:49:21.777910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.334 [2024-11-27 08:49:21.781163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.334 [2024-11-27 
08:49:21.781209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:25.334 [2024-11-27 08:49:21.781371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:25.334 [2024-11-27 08:49:21.781442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.334 [2024-11-27 08:49:21.781641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.334 [2024-11-27 08:49:21.781778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.334 spare 00:17:25.334 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.334 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:25.334 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.334 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.334 [2024-11-27 08:49:21.881950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:25.334 [2024-11-27 08:49:21.882168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:25.334 [2024-11-27 08:49:21.882670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:25.334 [2024-11-27 08:49:21.883068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:25.334 [2024-11-27 08:49:21.883214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:25.335 [2024-11-27 08:49:21.883581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.335 "name": "raid_bdev1", 00:17:25.335 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:25.335 "strip_size_kb": 0, 00:17:25.335 "state": "online", 00:17:25.335 "raid_level": "raid1", 00:17:25.335 "superblock": true, 00:17:25.335 "num_base_bdevs": 4, 00:17:25.335 "num_base_bdevs_discovered": 3, 00:17:25.335 
"num_base_bdevs_operational": 3, 00:17:25.335 "base_bdevs_list": [ 00:17:25.335 { 00:17:25.335 "name": "spare", 00:17:25.335 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:25.335 "is_configured": true, 00:17:25.335 "data_offset": 2048, 00:17:25.335 "data_size": 63488 00:17:25.335 }, 00:17:25.335 { 00:17:25.335 "name": null, 00:17:25.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.335 "is_configured": false, 00:17:25.335 "data_offset": 2048, 00:17:25.335 "data_size": 63488 00:17:25.335 }, 00:17:25.335 { 00:17:25.335 "name": "BaseBdev3", 00:17:25.335 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:25.335 "is_configured": true, 00:17:25.335 "data_offset": 2048, 00:17:25.335 "data_size": 63488 00:17:25.335 }, 00:17:25.335 { 00:17:25.335 "name": "BaseBdev4", 00:17:25.335 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:25.335 "is_configured": true, 00:17:25.335 "data_offset": 2048, 00:17:25.335 "data_size": 63488 00:17:25.335 } 00:17:25.335 ] 00:17:25.335 }' 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.335 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.902 08:49:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.902 "name": "raid_bdev1", 00:17:25.902 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:25.902 "strip_size_kb": 0, 00:17:25.902 "state": "online", 00:17:25.902 "raid_level": "raid1", 00:17:25.902 "superblock": true, 00:17:25.902 "num_base_bdevs": 4, 00:17:25.902 "num_base_bdevs_discovered": 3, 00:17:25.902 "num_base_bdevs_operational": 3, 00:17:25.902 "base_bdevs_list": [ 00:17:25.902 { 00:17:25.902 "name": "spare", 00:17:25.902 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:25.902 "is_configured": true, 00:17:25.902 "data_offset": 2048, 00:17:25.902 "data_size": 63488 00:17:25.902 }, 00:17:25.902 { 00:17:25.902 "name": null, 00:17:25.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.902 "is_configured": false, 00:17:25.902 "data_offset": 2048, 00:17:25.902 "data_size": 63488 00:17:25.902 }, 00:17:25.902 { 00:17:25.902 "name": "BaseBdev3", 00:17:25.902 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:25.902 "is_configured": true, 00:17:25.902 "data_offset": 2048, 00:17:25.902 "data_size": 63488 00:17:25.902 }, 00:17:25.902 { 00:17:25.902 "name": "BaseBdev4", 00:17:25.902 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:25.902 "is_configured": true, 00:17:25.902 "data_offset": 2048, 00:17:25.902 "data_size": 63488 00:17:25.902 } 00:17:25.902 ] 00:17:25.902 }' 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.902 08:49:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.902 [2024-11-27 08:49:22.598256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.902 08:49:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.902 "name": "raid_bdev1", 00:17:25.902 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:25.902 "strip_size_kb": 0, 00:17:25.902 "state": "online", 00:17:25.902 "raid_level": "raid1", 00:17:25.902 "superblock": true, 00:17:25.902 "num_base_bdevs": 4, 00:17:25.902 "num_base_bdevs_discovered": 2, 00:17:25.902 "num_base_bdevs_operational": 2, 00:17:25.902 "base_bdevs_list": [ 00:17:25.902 { 00:17:25.902 "name": null, 00:17:25.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.902 "is_configured": false, 00:17:25.902 "data_offset": 0, 00:17:25.902 "data_size": 63488 00:17:25.902 }, 00:17:25.902 { 00:17:25.902 "name": null, 00:17:25.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.902 "is_configured": false, 00:17:25.902 "data_offset": 2048, 00:17:25.902 "data_size": 63488 00:17:25.902 }, 
00:17:25.902 { 00:17:25.902 "name": "BaseBdev3", 00:17:25.902 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:25.902 "is_configured": true, 00:17:25.902 "data_offset": 2048, 00:17:25.902 "data_size": 63488 00:17:25.902 }, 00:17:25.902 { 00:17:25.902 "name": "BaseBdev4", 00:17:25.902 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:25.902 "is_configured": true, 00:17:25.902 "data_offset": 2048, 00:17:25.902 "data_size": 63488 00:17:25.902 } 00:17:25.902 ] 00:17:25.902 }' 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.902 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.471 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.471 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.471 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.471 [2024-11-27 08:49:23.134495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.471 [2024-11-27 08:49:23.134789] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:26.471 [2024-11-27 08:49:23.134816] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:26.471 [2024-11-27 08:49:23.134919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.471 [2024-11-27 08:49:23.149511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:17:26.471 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.471 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:26.471 [2024-11-27 08:49:23.152411] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.406 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.406 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.406 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.406 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.406 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.406 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.406 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.407 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.407 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.665 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.665 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.665 "name": "raid_bdev1", 00:17:27.665 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:27.665 "strip_size_kb": 0, 00:17:27.665 "state": "online", 00:17:27.665 "raid_level": "raid1", 
00:17:27.665 "superblock": true, 00:17:27.665 "num_base_bdevs": 4, 00:17:27.665 "num_base_bdevs_discovered": 3, 00:17:27.665 "num_base_bdevs_operational": 3, 00:17:27.665 "process": { 00:17:27.665 "type": "rebuild", 00:17:27.665 "target": "spare", 00:17:27.665 "progress": { 00:17:27.665 "blocks": 20480, 00:17:27.665 "percent": 32 00:17:27.665 } 00:17:27.665 }, 00:17:27.665 "base_bdevs_list": [ 00:17:27.665 { 00:17:27.665 "name": "spare", 00:17:27.665 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:27.665 "is_configured": true, 00:17:27.665 "data_offset": 2048, 00:17:27.665 "data_size": 63488 00:17:27.665 }, 00:17:27.665 { 00:17:27.665 "name": null, 00:17:27.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.665 "is_configured": false, 00:17:27.665 "data_offset": 2048, 00:17:27.665 "data_size": 63488 00:17:27.665 }, 00:17:27.665 { 00:17:27.665 "name": "BaseBdev3", 00:17:27.665 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:27.665 "is_configured": true, 00:17:27.665 "data_offset": 2048, 00:17:27.665 "data_size": 63488 00:17:27.665 }, 00:17:27.665 { 00:17:27.665 "name": "BaseBdev4", 00:17:27.665 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:27.665 "is_configured": true, 00:17:27.665 "data_offset": 2048, 00:17:27.665 "data_size": 63488 00:17:27.665 } 00:17:27.665 ] 00:17:27.665 }' 00:17:27.665 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.665 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.665 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.665 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.666 [2024-11-27 08:49:24.318332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.666 [2024-11-27 08:49:24.364079] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.666 [2024-11-27 08:49:24.364329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.666 [2024-11-27 08:49:24.364405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.666 [2024-11-27 08:49:24.364420] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.666 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.925 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.925 "name": "raid_bdev1", 00:17:27.925 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:27.925 "strip_size_kb": 0, 00:17:27.925 "state": "online", 00:17:27.925 "raid_level": "raid1", 00:17:27.925 "superblock": true, 00:17:27.925 "num_base_bdevs": 4, 00:17:27.925 "num_base_bdevs_discovered": 2, 00:17:27.925 "num_base_bdevs_operational": 2, 00:17:27.925 "base_bdevs_list": [ 00:17:27.925 { 00:17:27.925 "name": null, 00:17:27.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.925 "is_configured": false, 00:17:27.925 "data_offset": 0, 00:17:27.925 "data_size": 63488 00:17:27.925 }, 00:17:27.925 { 00:17:27.925 "name": null, 00:17:27.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.925 "is_configured": false, 00:17:27.925 "data_offset": 2048, 00:17:27.925 "data_size": 63488 00:17:27.925 }, 00:17:27.925 { 00:17:27.925 "name": "BaseBdev3", 00:17:27.925 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:27.925 "is_configured": true, 00:17:27.925 "data_offset": 2048, 00:17:27.925 "data_size": 63488 00:17:27.925 }, 00:17:27.925 { 00:17:27.925 "name": "BaseBdev4", 00:17:27.925 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:27.925 "is_configured": true, 00:17:27.925 "data_offset": 2048, 00:17:27.925 "data_size": 63488 00:17:27.925 } 00:17:27.925 ] 00:17:27.925 }' 00:17:27.925 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:27.925 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.184 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.184 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.184 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.184 [2024-11-27 08:49:24.925460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.184 [2024-11-27 08:49:24.925707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.184 [2024-11-27 08:49:24.925769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:28.184 [2024-11-27 08:49:24.925789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.184 [2024-11-27 08:49:24.926533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.184 [2024-11-27 08:49:24.926566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.184 [2024-11-27 08:49:24.926707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:28.184 [2024-11-27 08:49:24.926864] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:28.184 [2024-11-27 08:49:24.926901] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:28.184 [2024-11-27 08:49:24.926950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.184 [2024-11-27 08:49:24.941151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:17:28.443 spare 00:17:28.443 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.443 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:28.443 [2024-11-27 08:49:24.944064] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.378 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.378 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.378 "name": "raid_bdev1", 00:17:29.378 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:29.378 "strip_size_kb": 0, 00:17:29.378 "state": "online", 00:17:29.378 
"raid_level": "raid1", 00:17:29.378 "superblock": true, 00:17:29.378 "num_base_bdevs": 4, 00:17:29.378 "num_base_bdevs_discovered": 3, 00:17:29.378 "num_base_bdevs_operational": 3, 00:17:29.378 "process": { 00:17:29.378 "type": "rebuild", 00:17:29.378 "target": "spare", 00:17:29.378 "progress": { 00:17:29.378 "blocks": 20480, 00:17:29.378 "percent": 32 00:17:29.378 } 00:17:29.378 }, 00:17:29.378 "base_bdevs_list": [ 00:17:29.378 { 00:17:29.378 "name": "spare", 00:17:29.378 "uuid": "01ed303c-6a6d-5e4c-996d-8651b0bbd3ea", 00:17:29.378 "is_configured": true, 00:17:29.378 "data_offset": 2048, 00:17:29.378 "data_size": 63488 00:17:29.378 }, 00:17:29.378 { 00:17:29.378 "name": null, 00:17:29.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.378 "is_configured": false, 00:17:29.378 "data_offset": 2048, 00:17:29.378 "data_size": 63488 00:17:29.378 }, 00:17:29.378 { 00:17:29.378 "name": "BaseBdev3", 00:17:29.378 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:29.378 "is_configured": true, 00:17:29.378 "data_offset": 2048, 00:17:29.378 "data_size": 63488 00:17:29.378 }, 00:17:29.378 { 00:17:29.379 "name": "BaseBdev4", 00:17:29.379 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:29.379 "is_configured": true, 00:17:29.379 "data_offset": 2048, 00:17:29.379 "data_size": 63488 00:17:29.379 } 00:17:29.379 ] 00:17:29.379 }' 00:17:29.379 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.379 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.379 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.379 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.379 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:29.379 08:49:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.379 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.379 [2024-11-27 08:49:26.122181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:29.637 [2024-11-27 08:49:26.155905] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:29.637 [2024-11-27 08:49:26.156233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.637 [2024-11-27 08:49:26.156490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:29.637 [2024-11-27 08:49:26.156627] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.637 
08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.637 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.637 "name": "raid_bdev1", 00:17:29.637 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:29.637 "strip_size_kb": 0, 00:17:29.637 "state": "online", 00:17:29.637 "raid_level": "raid1", 00:17:29.637 "superblock": true, 00:17:29.637 "num_base_bdevs": 4, 00:17:29.637 "num_base_bdevs_discovered": 2, 00:17:29.637 "num_base_bdevs_operational": 2, 00:17:29.637 "base_bdevs_list": [ 00:17:29.637 { 00:17:29.637 "name": null, 00:17:29.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.637 "is_configured": false, 00:17:29.637 "data_offset": 0, 00:17:29.637 "data_size": 63488 00:17:29.637 }, 00:17:29.637 { 00:17:29.637 "name": null, 00:17:29.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.637 "is_configured": false, 00:17:29.637 "data_offset": 2048, 00:17:29.637 "data_size": 63488 00:17:29.637 }, 00:17:29.637 { 00:17:29.637 "name": "BaseBdev3", 00:17:29.637 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:29.637 "is_configured": true, 00:17:29.637 "data_offset": 2048, 00:17:29.637 "data_size": 63488 00:17:29.637 }, 00:17:29.637 { 00:17:29.637 "name": "BaseBdev4", 00:17:29.637 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:29.637 "is_configured": true, 00:17:29.637 "data_offset": 2048, 00:17:29.637 "data_size": 63488 00:17:29.637 } 00:17:29.637 ] 00:17:29.637 }' 00:17:29.637 08:49:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.638 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.204 "name": "raid_bdev1", 00:17:30.204 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:30.204 "strip_size_kb": 0, 00:17:30.204 "state": "online", 00:17:30.204 "raid_level": "raid1", 00:17:30.204 "superblock": true, 00:17:30.204 "num_base_bdevs": 4, 00:17:30.204 "num_base_bdevs_discovered": 2, 00:17:30.204 "num_base_bdevs_operational": 2, 00:17:30.204 "base_bdevs_list": [ 00:17:30.204 { 00:17:30.204 "name": null, 00:17:30.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.204 "is_configured": false, 00:17:30.204 "data_offset": 0, 00:17:30.204 "data_size": 63488 00:17:30.204 }, 00:17:30.204 
{ 00:17:30.204 "name": null, 00:17:30.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.204 "is_configured": false, 00:17:30.204 "data_offset": 2048, 00:17:30.204 "data_size": 63488 00:17:30.204 }, 00:17:30.204 { 00:17:30.204 "name": "BaseBdev3", 00:17:30.204 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:30.204 "is_configured": true, 00:17:30.204 "data_offset": 2048, 00:17:30.204 "data_size": 63488 00:17:30.204 }, 00:17:30.204 { 00:17:30.204 "name": "BaseBdev4", 00:17:30.204 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:30.204 "is_configured": true, 00:17:30.204 "data_offset": 2048, 00:17:30.204 "data_size": 63488 00:17:30.204 } 00:17:30.204 ] 00:17:30.204 }' 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.204 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:30.205 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.205 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.205 [2024-11-27 08:49:26.918968] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:30.205 [2024-11-27 08:49:26.919241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.205 [2024-11-27 08:49:26.919295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:30.205 [2024-11-27 08:49:26.919316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.205 [2024-11-27 08:49:26.920040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.205 [2024-11-27 08:49:26.920079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.205 [2024-11-27 08:49:26.920218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:30.205 [2024-11-27 08:49:26.920255] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:30.205 [2024-11-27 08:49:26.920269] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:30.205 [2024-11-27 08:49:26.920301] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:30.205 BaseBdev1 00:17:30.205 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.205 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.579 08:49:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.579 "name": "raid_bdev1", 00:17:31.579 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:31.579 "strip_size_kb": 0, 00:17:31.579 "state": "online", 00:17:31.579 "raid_level": "raid1", 00:17:31.579 "superblock": true, 00:17:31.579 "num_base_bdevs": 4, 00:17:31.579 "num_base_bdevs_discovered": 2, 00:17:31.579 "num_base_bdevs_operational": 2, 00:17:31.579 "base_bdevs_list": [ 00:17:31.579 { 00:17:31.579 "name": null, 00:17:31.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.579 "is_configured": false, 00:17:31.579 "data_offset": 0, 00:17:31.579 "data_size": 63488 00:17:31.579 }, 00:17:31.579 { 00:17:31.579 "name": null, 00:17:31.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.579 
"is_configured": false, 00:17:31.579 "data_offset": 2048, 00:17:31.579 "data_size": 63488 00:17:31.579 }, 00:17:31.579 { 00:17:31.579 "name": "BaseBdev3", 00:17:31.579 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:31.579 "is_configured": true, 00:17:31.579 "data_offset": 2048, 00:17:31.579 "data_size": 63488 00:17:31.579 }, 00:17:31.579 { 00:17:31.579 "name": "BaseBdev4", 00:17:31.579 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:31.579 "is_configured": true, 00:17:31.579 "data_offset": 2048, 00:17:31.579 "data_size": 63488 00:17:31.579 } 00:17:31.579 ] 00:17:31.579 }' 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.579 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:31.838 "name": "raid_bdev1", 00:17:31.838 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:31.838 "strip_size_kb": 0, 00:17:31.838 "state": "online", 00:17:31.838 "raid_level": "raid1", 00:17:31.838 "superblock": true, 00:17:31.838 "num_base_bdevs": 4, 00:17:31.838 "num_base_bdevs_discovered": 2, 00:17:31.838 "num_base_bdevs_operational": 2, 00:17:31.838 "base_bdevs_list": [ 00:17:31.838 { 00:17:31.838 "name": null, 00:17:31.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.838 "is_configured": false, 00:17:31.838 "data_offset": 0, 00:17:31.838 "data_size": 63488 00:17:31.838 }, 00:17:31.838 { 00:17:31.838 "name": null, 00:17:31.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.838 "is_configured": false, 00:17:31.838 "data_offset": 2048, 00:17:31.838 "data_size": 63488 00:17:31.838 }, 00:17:31.838 { 00:17:31.838 "name": "BaseBdev3", 00:17:31.838 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:31.838 "is_configured": true, 00:17:31.838 "data_offset": 2048, 00:17:31.838 "data_size": 63488 00:17:31.838 }, 00:17:31.838 { 00:17:31.838 "name": "BaseBdev4", 00:17:31.838 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:31.838 "is_configured": true, 00:17:31.838 "data_offset": 2048, 00:17:31.838 "data_size": 63488 00:17:31.838 } 00:17:31.838 ] 00:17:31.838 }' 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.838 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.097 [2024-11-27 08:49:28.611442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.097 [2024-11-27 08:49:28.611739] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:32.097 [2024-11-27 08:49:28.611763] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:32.097 request: 00:17:32.097 { 00:17:32.097 "base_bdev": "BaseBdev1", 00:17:32.097 "raid_bdev": "raid_bdev1", 00:17:32.097 "method": "bdev_raid_add_base_bdev", 00:17:32.097 "req_id": 1 00:17:32.097 } 00:17:32.097 Got JSON-RPC error response 00:17:32.097 response: 00:17:32.097 { 00:17:32.097 "code": -22, 00:17:32.097 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:32.097 } 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.097 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.030 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.030 "name": "raid_bdev1", 00:17:33.030 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:33.030 "strip_size_kb": 0, 00:17:33.030 "state": "online", 00:17:33.030 "raid_level": "raid1", 00:17:33.030 "superblock": true, 00:17:33.030 "num_base_bdevs": 4, 00:17:33.030 "num_base_bdevs_discovered": 2, 00:17:33.030 "num_base_bdevs_operational": 2, 00:17:33.030 "base_bdevs_list": [ 00:17:33.030 { 00:17:33.030 "name": null, 00:17:33.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.031 "is_configured": false, 00:17:33.031 "data_offset": 0, 00:17:33.031 "data_size": 63488 00:17:33.031 }, 00:17:33.031 { 00:17:33.031 "name": null, 00:17:33.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.031 "is_configured": false, 00:17:33.031 "data_offset": 2048, 00:17:33.031 "data_size": 63488 00:17:33.031 }, 00:17:33.031 { 00:17:33.031 "name": "BaseBdev3", 00:17:33.031 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:33.031 "is_configured": true, 00:17:33.031 "data_offset": 2048, 00:17:33.031 "data_size": 63488 00:17:33.031 }, 00:17:33.031 { 00:17:33.031 "name": "BaseBdev4", 00:17:33.031 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:33.031 "is_configured": true, 00:17:33.031 "data_offset": 2048, 00:17:33.031 "data_size": 63488 00:17:33.031 } 00:17:33.031 ] 00:17:33.031 }' 00:17:33.031 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.031 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.596 08:49:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.596 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.596 "name": "raid_bdev1", 00:17:33.596 "uuid": "84cdbb00-4dde-4ef5-9a6c-1f0bc5f8759d", 00:17:33.596 "strip_size_kb": 0, 00:17:33.596 "state": "online", 00:17:33.596 "raid_level": "raid1", 00:17:33.597 "superblock": true, 00:17:33.597 "num_base_bdevs": 4, 00:17:33.597 "num_base_bdevs_discovered": 2, 00:17:33.597 "num_base_bdevs_operational": 2, 00:17:33.597 "base_bdevs_list": [ 00:17:33.597 { 00:17:33.597 "name": null, 00:17:33.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.597 "is_configured": false, 00:17:33.597 "data_offset": 0, 00:17:33.597 "data_size": 63488 00:17:33.597 }, 00:17:33.597 { 00:17:33.597 "name": null, 00:17:33.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.597 "is_configured": false, 00:17:33.597 "data_offset": 2048, 00:17:33.597 "data_size": 63488 00:17:33.597 }, 00:17:33.597 { 00:17:33.597 "name": "BaseBdev3", 00:17:33.597 "uuid": "c91e5368-be9a-5d54-a6f0-c22a278b42ef", 00:17:33.597 "is_configured": true, 00:17:33.597 "data_offset": 2048, 00:17:33.597 "data_size": 63488 00:17:33.597 }, 
00:17:33.597 { 00:17:33.597 "name": "BaseBdev4", 00:17:33.597 "uuid": "6501e40a-fe68-5117-ae82-5040cad3d9c7", 00:17:33.597 "is_configured": true, 00:17:33.597 "data_offset": 2048, 00:17:33.597 "data_size": 63488 00:17:33.597 } 00:17:33.597 ] 00:17:33.597 }' 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78411 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' -z 78411 ']' 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # kill -0 78411 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # uname 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 78411 00:17:33.597 killing process with pid 78411 00:17:33.597 Received shutdown signal, test time was about 60.000000 seconds 00:17:33.597 00:17:33.597 Latency(us) 00:17:33.597 [2024-11-27T08:49:30.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.597 [2024-11-27T08:49:30.357Z] =================================================================================================================== 00:17:33.597 [2024-11-27T08:49:30.357Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 
00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 78411' 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # kill 78411 00:17:33.597 [2024-11-27 08:49:30.339366] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.597 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@975 -- # wait 78411 00:17:33.597 [2024-11-27 08:49:30.339544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.597 [2024-11-27 08:49:30.339653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.597 [2024-11-27 08:49:30.339670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:34.162 [2024-11-27 08:49:30.799093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:35.536 ************************************ 00:17:35.536 END TEST raid_rebuild_test_sb 00:17:35.536 ************************************ 00:17:35.536 00:17:35.536 real 0m29.669s 00:17:35.536 user 0m36.207s 00:17:35.536 sys 0m4.359s 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.536 08:49:31 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:17:35.536 08:49:31 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:17:35.536 08:49:31 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:17:35.536 08:49:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:17:35.536 ************************************ 00:17:35.536 START TEST raid_rebuild_test_io 00:17:35.536 ************************************ 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 4 false true true 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79209 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79209 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # '[' -z 79209 ']' 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.536 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local max_retries=100 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@841 -- # xtrace_disable 00:17:35.536 08:49:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.536 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:35.536 Zero copy mechanism will not be used. 00:17:35.536 [2024-11-27 08:49:32.046836] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:17:35.536 [2024-11-27 08:49:32.047027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79209 ] 00:17:35.536 [2024-11-27 08:49:32.228809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.795 [2024-11-27 08:49:32.379047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.054 [2024-11-27 08:49:32.602313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.054 [2024-11-27 08:49:32.602693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.311 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:17:36.311 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # return 0 00:17:36.311 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.311 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:36.311 08:49:32 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.311 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.311 BaseBdev1_malloc 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.311 [2024-11-27 08:49:33.048850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:36.311 [2024-11-27 08:49:33.049116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.311 [2024-11-27 08:49:33.049280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:36.311 [2024-11-27 08:49:33.049430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.311 [2024-11-27 08:49:33.052520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.311 [2024-11-27 08:49:33.052700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.311 BaseBdev1 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.311 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.569 
BaseBdev2_malloc 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 [2024-11-27 08:49:33.104095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:36.570 [2024-11-27 08:49:33.104200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.570 [2024-11-27 08:49:33.104230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:36.570 [2024-11-27 08:49:33.104251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.570 [2024-11-27 08:49:33.107375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.570 [2024-11-27 08:49:33.107451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:36.570 BaseBdev2 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 BaseBdev3_malloc 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 [2024-11-27 08:49:33.171993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:36.570 [2024-11-27 08:49:33.172236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.570 [2024-11-27 08:49:33.172317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:36.570 [2024-11-27 08:49:33.172576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.570 [2024-11-27 08:49:33.175631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.570 [2024-11-27 08:49:33.175831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:36.570 BaseBdev3 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 BaseBdev4_malloc 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 [2024-11-27 08:49:33.230269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:36.570 [2024-11-27 08:49:33.230511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.570 [2024-11-27 08:49:33.230587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:36.570 [2024-11-27 08:49:33.230715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.570 [2024-11-27 08:49:33.233624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.570 [2024-11-27 08:49:33.233807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:36.570 BaseBdev4 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 spare_malloc 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 spare_delay 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 [2024-11-27 08:49:33.288659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:36.570 [2024-11-27 08:49:33.288889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.570 [2024-11-27 08:49:33.288929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:36.570 [2024-11-27 08:49:33.288949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.570 [2024-11-27 08:49:33.291995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.570 [2024-11-27 08:49:33.292046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:36.570 spare 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.570 [2024-11-27 08:49:33.296743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.570 [2024-11-27 08:49:33.299455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.570 [2024-11-27 08:49:33.299699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.570 [2024-11-27 08:49:33.299825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:17:36.570 [2024-11-27 08:49:33.300004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:36.570 [2024-11-27 08:49:33.300133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:36.570 [2024-11-27 08:49:33.300564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:36.570 [2024-11-27 08:49:33.300923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:36.570 [2024-11-27 08:49:33.301052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:36.570 [2024-11-27 08:49:33.301326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.570 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.571 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.828 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.828 "name": "raid_bdev1", 00:17:36.828 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:36.828 "strip_size_kb": 0, 00:17:36.828 "state": "online", 00:17:36.828 "raid_level": "raid1", 00:17:36.828 "superblock": false, 00:17:36.829 "num_base_bdevs": 4, 00:17:36.829 "num_base_bdevs_discovered": 4, 00:17:36.829 "num_base_bdevs_operational": 4, 00:17:36.829 "base_bdevs_list": [ 00:17:36.829 { 00:17:36.829 "name": "BaseBdev1", 00:17:36.829 "uuid": "36fb14f3-0348-545f-9088-da8a3498b966", 00:17:36.829 "is_configured": true, 00:17:36.829 "data_offset": 0, 00:17:36.829 "data_size": 65536 00:17:36.829 }, 00:17:36.829 { 00:17:36.829 "name": "BaseBdev2", 00:17:36.829 "uuid": "a022f04f-54f8-5701-acbf-b3de9ecd089e", 00:17:36.829 "is_configured": true, 00:17:36.829 "data_offset": 0, 00:17:36.829 "data_size": 65536 00:17:36.829 }, 00:17:36.829 { 00:17:36.829 "name": "BaseBdev3", 00:17:36.829 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:36.829 "is_configured": true, 00:17:36.829 "data_offset": 0, 00:17:36.829 "data_size": 65536 00:17:36.829 }, 00:17:36.829 { 00:17:36.829 "name": "BaseBdev4", 00:17:36.829 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:36.829 "is_configured": true, 00:17:36.829 "data_offset": 0, 00:17:36.829 "data_size": 65536 00:17:36.829 } 00:17:36.829 ] 00:17:36.829 }' 00:17:36.829 
08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.829 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.087 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.087 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:37.087 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.087 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.087 [2024-11-27 08:49:33.805929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.087 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 
-- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.346 [2024-11-27 08:49:33.909440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.346 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.347 "name": "raid_bdev1", 00:17:37.347 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:37.347 "strip_size_kb": 0, 00:17:37.347 "state": "online", 00:17:37.347 "raid_level": "raid1", 00:17:37.347 "superblock": false, 00:17:37.347 "num_base_bdevs": 4, 00:17:37.347 "num_base_bdevs_discovered": 3, 00:17:37.347 "num_base_bdevs_operational": 3, 00:17:37.347 "base_bdevs_list": [ 00:17:37.347 { 00:17:37.347 "name": null, 00:17:37.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.347 "is_configured": false, 00:17:37.347 "data_offset": 0, 00:17:37.347 "data_size": 65536 00:17:37.347 }, 00:17:37.347 { 00:17:37.347 "name": "BaseBdev2", 00:17:37.347 "uuid": "a022f04f-54f8-5701-acbf-b3de9ecd089e", 00:17:37.347 "is_configured": true, 00:17:37.347 "data_offset": 0, 00:17:37.347 "data_size": 65536 00:17:37.347 }, 00:17:37.347 { 00:17:37.347 "name": "BaseBdev3", 00:17:37.347 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:37.347 "is_configured": true, 00:17:37.347 "data_offset": 0, 00:17:37.347 "data_size": 65536 00:17:37.347 }, 00:17:37.347 { 00:17:37.347 "name": "BaseBdev4", 00:17:37.347 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:37.347 "is_configured": true, 00:17:37.347 "data_offset": 0, 00:17:37.347 "data_size": 65536 00:17:37.347 } 00:17:37.347 ] 00:17:37.347 }' 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.347 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.347 [2024-11-27 08:49:34.042250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:37.347 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:37.347 Zero copy mechanism will not be used. 00:17:37.347 Running I/O for 60 seconds... 
00:17:37.913 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:37.913 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.913 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.913 [2024-11-27 08:49:34.436216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.913 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.913 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:37.913 [2024-11-27 08:49:34.517277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:37.913 [2024-11-27 08:49:34.520186] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.913 [2024-11-27 08:49:34.642620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:37.913 [2024-11-27 08:49:34.643412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:38.171 [2024-11-27 08:49:34.784979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:38.429 147.00 IOPS, 441.00 MiB/s [2024-11-27T08:49:35.189Z] [2024-11-27 08:49:35.056681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:38.688 [2024-11-27 08:49:35.186885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:38.688 [2024-11-27 08:49:35.187996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.946 [2024-11-27 08:49:35.544869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:38.946 [2024-11-27 08:49:35.546986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:38.946 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.946 "name": "raid_bdev1", 00:17:38.946 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:38.946 "strip_size_kb": 0, 00:17:38.946 "state": "online", 00:17:38.946 "raid_level": "raid1", 00:17:38.946 "superblock": false, 00:17:38.946 "num_base_bdevs": 4, 00:17:38.947 "num_base_bdevs_discovered": 4, 00:17:38.947 "num_base_bdevs_operational": 4, 00:17:38.947 "process": { 00:17:38.947 "type": "rebuild", 00:17:38.947 "target": "spare", 00:17:38.947 "progress": { 00:17:38.947 "blocks": 12288, 
00:17:38.947 "percent": 18 00:17:38.947 } 00:17:38.947 }, 00:17:38.947 "base_bdevs_list": [ 00:17:38.947 { 00:17:38.947 "name": "spare", 00:17:38.947 "uuid": "6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:38.947 "is_configured": true, 00:17:38.947 "data_offset": 0, 00:17:38.947 "data_size": 65536 00:17:38.947 }, 00:17:38.947 { 00:17:38.947 "name": "BaseBdev2", 00:17:38.947 "uuid": "a022f04f-54f8-5701-acbf-b3de9ecd089e", 00:17:38.947 "is_configured": true, 00:17:38.947 "data_offset": 0, 00:17:38.947 "data_size": 65536 00:17:38.947 }, 00:17:38.947 { 00:17:38.947 "name": "BaseBdev3", 00:17:38.947 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:38.947 "is_configured": true, 00:17:38.947 "data_offset": 0, 00:17:38.947 "data_size": 65536 00:17:38.947 }, 00:17:38.947 { 00:17:38.947 "name": "BaseBdev4", 00:17:38.947 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:38.947 "is_configured": true, 00:17:38.947 "data_offset": 0, 00:17:38.947 "data_size": 65536 00:17:38.947 } 00:17:38.947 ] 00:17:38.947 }' 00:17:38.947 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.947 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.947 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.947 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.947 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:38.947 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.947 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.947 [2024-11-27 08:49:35.669591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.947 [2024-11-27 08:49:35.677692] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:38.947 [2024-11-27 08:49:35.678565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:39.206 [2024-11-27 08:49:35.781945] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.206 [2024-11-27 08:49:35.795935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.206 [2024-11-27 08:49:35.796155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.206 [2024-11-27 08:49:35.796194] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:39.206 [2024-11-27 08:49:35.839648] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.206 "name": "raid_bdev1", 00:17:39.206 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:39.206 "strip_size_kb": 0, 00:17:39.206 "state": "online", 00:17:39.206 "raid_level": "raid1", 00:17:39.206 "superblock": false, 00:17:39.206 "num_base_bdevs": 4, 00:17:39.206 "num_base_bdevs_discovered": 3, 00:17:39.206 "num_base_bdevs_operational": 3, 00:17:39.206 "base_bdevs_list": [ 00:17:39.206 { 00:17:39.206 "name": null, 00:17:39.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.206 "is_configured": false, 00:17:39.206 "data_offset": 0, 00:17:39.206 "data_size": 65536 00:17:39.206 }, 00:17:39.206 { 00:17:39.206 "name": "BaseBdev2", 00:17:39.206 "uuid": "a022f04f-54f8-5701-acbf-b3de9ecd089e", 00:17:39.206 "is_configured": true, 00:17:39.206 "data_offset": 0, 00:17:39.206 "data_size": 65536 00:17:39.206 }, 00:17:39.206 { 00:17:39.206 "name": "BaseBdev3", 00:17:39.206 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:39.206 "is_configured": true, 00:17:39.206 "data_offset": 0, 00:17:39.206 "data_size": 65536 00:17:39.206 }, 00:17:39.206 { 00:17:39.206 "name": "BaseBdev4", 00:17:39.206 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:39.206 "is_configured": true, 00:17:39.206 
"data_offset": 0, 00:17:39.206 "data_size": 65536 00:17:39.206 } 00:17:39.206 ] 00:17:39.206 }' 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.206 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.725 109.50 IOPS, 328.50 MiB/s [2024-11-27T08:49:36.485Z] 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.725 "name": "raid_bdev1", 00:17:39.725 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:39.725 "strip_size_kb": 0, 00:17:39.725 "state": "online", 00:17:39.725 "raid_level": "raid1", 00:17:39.725 "superblock": false, 00:17:39.725 "num_base_bdevs": 4, 00:17:39.725 "num_base_bdevs_discovered": 3, 00:17:39.725 "num_base_bdevs_operational": 3, 00:17:39.725 "base_bdevs_list": [ 00:17:39.725 { 00:17:39.725 "name": null, 00:17:39.725 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.725 "is_configured": false, 00:17:39.725 "data_offset": 0, 00:17:39.725 "data_size": 65536 00:17:39.725 }, 00:17:39.725 { 00:17:39.725 "name": "BaseBdev2", 00:17:39.725 "uuid": "a022f04f-54f8-5701-acbf-b3de9ecd089e", 00:17:39.725 "is_configured": true, 00:17:39.725 "data_offset": 0, 00:17:39.725 "data_size": 65536 00:17:39.725 }, 00:17:39.725 { 00:17:39.725 "name": "BaseBdev3", 00:17:39.725 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:39.725 "is_configured": true, 00:17:39.725 "data_offset": 0, 00:17:39.725 "data_size": 65536 00:17:39.725 }, 00:17:39.725 { 00:17:39.725 "name": "BaseBdev4", 00:17:39.725 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:39.725 "is_configured": true, 00:17:39.725 "data_offset": 0, 00:17:39.725 "data_size": 65536 00:17:39.725 } 00:17:39.725 ] 00:17:39.725 }' 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.725 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.984 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.984 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:39.984 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.984 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.984 [2024-11-27 08:49:36.535281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.984 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.984 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:39.984 [2024-11-27 
08:49:36.608735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:39.984 [2024-11-27 08:49:36.611606] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.984 [2024-11-27 08:49:36.723997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:39.984 [2024-11-27 08:49:36.724872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:40.254 [2024-11-27 08:49:36.950608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:40.254 [2024-11-27 08:49:36.951776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:40.770 116.00 IOPS, 348.00 MiB/s [2024-11-27T08:49:37.530Z] [2024-11-27 08:49:37.303278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:40.770 [2024-11-27 08:49:37.305388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:41.038 [2024-11-27 08:49:37.554140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.038 08:49:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.038 "name": "raid_bdev1", 00:17:41.038 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:41.038 "strip_size_kb": 0, 00:17:41.038 "state": "online", 00:17:41.038 "raid_level": "raid1", 00:17:41.038 "superblock": false, 00:17:41.038 "num_base_bdevs": 4, 00:17:41.038 "num_base_bdevs_discovered": 4, 00:17:41.038 "num_base_bdevs_operational": 4, 00:17:41.038 "process": { 00:17:41.038 "type": "rebuild", 00:17:41.038 "target": "spare", 00:17:41.038 "progress": { 00:17:41.038 "blocks": 10240, 00:17:41.038 "percent": 15 00:17:41.038 } 00:17:41.038 }, 00:17:41.038 "base_bdevs_list": [ 00:17:41.038 { 00:17:41.038 "name": "spare", 00:17:41.038 "uuid": "6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:41.038 "is_configured": true, 00:17:41.038 "data_offset": 0, 00:17:41.038 "data_size": 65536 00:17:41.038 }, 00:17:41.038 { 00:17:41.038 "name": "BaseBdev2", 00:17:41.038 "uuid": "a022f04f-54f8-5701-acbf-b3de9ecd089e", 00:17:41.038 "is_configured": true, 00:17:41.038 "data_offset": 0, 00:17:41.038 "data_size": 65536 00:17:41.038 }, 00:17:41.038 { 00:17:41.038 "name": "BaseBdev3", 00:17:41.038 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:41.038 "is_configured": true, 00:17:41.038 "data_offset": 0, 00:17:41.038 "data_size": 65536 00:17:41.038 }, 00:17:41.038 { 00:17:41.038 "name": "BaseBdev4", 00:17:41.038 "uuid": 
"0a810f9c-e182-5422-9e44-41d55da80687", 00:17:41.038 "is_configured": true, 00:17:41.038 "data_offset": 0, 00:17:41.038 "data_size": 65536 00:17:41.038 } 00:17:41.038 ] 00:17:41.038 }' 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.038 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.038 [2024-11-27 08:49:37.742173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:41.299 [2024-11-27 08:49:37.898621] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:41.299 [2024-11-27 08:49:37.898891] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:41.299 
08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.299 "name": "raid_bdev1", 00:17:41.299 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:41.299 "strip_size_kb": 0, 00:17:41.299 "state": "online", 00:17:41.299 "raid_level": "raid1", 00:17:41.299 "superblock": false, 00:17:41.299 "num_base_bdevs": 4, 00:17:41.299 "num_base_bdevs_discovered": 3, 00:17:41.299 "num_base_bdevs_operational": 3, 00:17:41.299 "process": { 00:17:41.299 "type": "rebuild", 00:17:41.299 "target": "spare", 00:17:41.299 "progress": { 00:17:41.299 "blocks": 14336, 00:17:41.299 "percent": 21 00:17:41.299 } 00:17:41.299 }, 00:17:41.299 "base_bdevs_list": [ 00:17:41.299 { 00:17:41.299 "name": "spare", 00:17:41.299 "uuid": 
"6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:41.299 "is_configured": true, 00:17:41.299 "data_offset": 0, 00:17:41.299 "data_size": 65536 00:17:41.299 }, 00:17:41.299 { 00:17:41.299 "name": null, 00:17:41.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.299 "is_configured": false, 00:17:41.299 "data_offset": 0, 00:17:41.299 "data_size": 65536 00:17:41.299 }, 00:17:41.299 { 00:17:41.299 "name": "BaseBdev3", 00:17:41.299 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:41.299 "is_configured": true, 00:17:41.299 "data_offset": 0, 00:17:41.299 "data_size": 65536 00:17:41.299 }, 00:17:41.299 { 00:17:41.299 "name": "BaseBdev4", 00:17:41.299 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:41.299 "is_configured": true, 00:17:41.299 "data_offset": 0, 00:17:41.299 "data_size": 65536 00:17:41.299 } 00:17:41.299 ] 00:17:41.299 }' 00:17:41.299 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.299 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.299 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=532 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.586 08:49:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.586 106.75 IOPS, 320.25 MiB/s [2024-11-27T08:49:38.346Z] 08:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.586 "name": "raid_bdev1", 00:17:41.586 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:41.586 "strip_size_kb": 0, 00:17:41.586 "state": "online", 00:17:41.586 "raid_level": "raid1", 00:17:41.586 "superblock": false, 00:17:41.586 "num_base_bdevs": 4, 00:17:41.586 "num_base_bdevs_discovered": 3, 00:17:41.586 "num_base_bdevs_operational": 3, 00:17:41.586 "process": { 00:17:41.586 "type": "rebuild", 00:17:41.586 "target": "spare", 00:17:41.586 "progress": { 00:17:41.586 "blocks": 16384, 00:17:41.586 "percent": 25 00:17:41.586 } 00:17:41.586 }, 00:17:41.586 "base_bdevs_list": [ 00:17:41.586 { 00:17:41.586 "name": "spare", 00:17:41.586 "uuid": "6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:41.586 "is_configured": true, 00:17:41.586 "data_offset": 0, 00:17:41.586 "data_size": 65536 00:17:41.586 }, 00:17:41.586 { 00:17:41.586 "name": null, 00:17:41.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.586 "is_configured": false, 00:17:41.586 "data_offset": 0, 00:17:41.586 "data_size": 65536 00:17:41.586 }, 00:17:41.586 { 00:17:41.586 "name": "BaseBdev3", 00:17:41.586 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:41.586 "is_configured": true, 00:17:41.586 "data_offset": 0, 
00:17:41.586 "data_size": 65536 00:17:41.586 }, 00:17:41.586 { 00:17:41.586 "name": "BaseBdev4", 00:17:41.586 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:41.586 "is_configured": true, 00:17:41.586 "data_offset": 0, 00:17:41.586 "data_size": 65536 00:17:41.586 } 00:17:41.586 ] 00:17:41.586 }' 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.586 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.586 [2024-11-27 08:49:38.273408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:41.844 [2024-11-27 08:49:38.477368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:41.844 [2024-11-27 08:49:38.477952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:42.411 [2024-11-27 08:49:38.916154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:42.669 95.20 IOPS, 285.60 MiB/s [2024-11-27T08:49:39.429Z] 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.669 "name": "raid_bdev1", 00:17:42.669 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:42.669 "strip_size_kb": 0, 00:17:42.669 "state": "online", 00:17:42.669 "raid_level": "raid1", 00:17:42.669 "superblock": false, 00:17:42.669 "num_base_bdevs": 4, 00:17:42.669 "num_base_bdevs_discovered": 3, 00:17:42.669 "num_base_bdevs_operational": 3, 00:17:42.669 "process": { 00:17:42.669 "type": "rebuild", 00:17:42.669 "target": "spare", 00:17:42.669 "progress": { 00:17:42.669 "blocks": 32768, 00:17:42.669 "percent": 50 00:17:42.669 } 00:17:42.669 }, 00:17:42.669 "base_bdevs_list": [ 00:17:42.669 { 00:17:42.669 "name": "spare", 00:17:42.669 "uuid": "6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:42.669 "is_configured": true, 00:17:42.669 "data_offset": 0, 00:17:42.669 "data_size": 65536 00:17:42.669 }, 00:17:42.669 { 00:17:42.669 "name": null, 00:17:42.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.669 "is_configured": false, 00:17:42.669 "data_offset": 0, 00:17:42.669 "data_size": 65536 00:17:42.669 }, 00:17:42.669 { 00:17:42.669 "name": "BaseBdev3", 00:17:42.669 "uuid": 
"ecceed37-fdf4-5683-a459-27f667628b15", 00:17:42.669 "is_configured": true, 00:17:42.669 "data_offset": 0, 00:17:42.669 "data_size": 65536 00:17:42.669 }, 00:17:42.669 { 00:17:42.669 "name": "BaseBdev4", 00:17:42.669 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:42.669 "is_configured": true, 00:17:42.669 "data_offset": 0, 00:17:42.669 "data_size": 65536 00:17:42.669 } 00:17:42.669 ] 00:17:42.669 }' 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.669 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.927 [2024-11-27 08:49:39.536826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:43.184 [2024-11-27 08:49:39.783328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:43.700 84.83 IOPS, 254.50 MiB/s [2024-11-27T08:49:40.460Z] 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.700 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.700 "name": "raid_bdev1", 00:17:43.700 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:43.700 "strip_size_kb": 0, 00:17:43.700 "state": "online", 00:17:43.700 "raid_level": "raid1", 00:17:43.700 "superblock": false, 00:17:43.700 "num_base_bdevs": 4, 00:17:43.700 "num_base_bdevs_discovered": 3, 00:17:43.700 "num_base_bdevs_operational": 3, 00:17:43.700 "process": { 00:17:43.700 "type": "rebuild", 00:17:43.700 "target": "spare", 00:17:43.700 "progress": { 00:17:43.700 "blocks": 49152, 00:17:43.700 "percent": 75 00:17:43.700 } 00:17:43.700 }, 00:17:43.700 "base_bdevs_list": [ 00:17:43.700 { 00:17:43.700 "name": "spare", 00:17:43.700 "uuid": "6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:43.700 "is_configured": true, 00:17:43.700 "data_offset": 0, 00:17:43.700 "data_size": 65536 00:17:43.700 }, 00:17:43.700 { 00:17:43.700 "name": null, 00:17:43.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.700 "is_configured": false, 00:17:43.700 "data_offset": 0, 00:17:43.701 "data_size": 65536 00:17:43.701 }, 00:17:43.701 { 00:17:43.701 "name": "BaseBdev3", 00:17:43.701 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:43.701 "is_configured": true, 00:17:43.701 "data_offset": 0, 00:17:43.701 "data_size": 65536 00:17:43.701 }, 00:17:43.701 { 00:17:43.701 "name": "BaseBdev4", 00:17:43.701 "uuid": 
"0a810f9c-e182-5422-9e44-41d55da80687", 00:17:43.701 "is_configured": true, 00:17:43.701 "data_offset": 0, 00:17:43.701 "data_size": 65536 00:17:43.701 } 00:17:43.701 ] 00:17:43.701 }' 00:17:43.701 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.957 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.957 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.957 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.957 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.214 [2024-11-27 08:49:40.820078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:44.214 [2024-11-27 08:49:40.821003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:44.471 [2024-11-27 08:49:41.034766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:17:44.752 78.57 IOPS, 235.71 MiB/s [2024-11-27T08:49:41.512Z] [2024-11-27 08:49:41.340298] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:44.752 [2024-11-27 08:49:41.391926] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:44.752 [2024-11-27 08:49:41.395829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.061 "name": "raid_bdev1", 00:17:45.061 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:45.061 "strip_size_kb": 0, 00:17:45.061 "state": "online", 00:17:45.061 "raid_level": "raid1", 00:17:45.061 "superblock": false, 00:17:45.061 "num_base_bdevs": 4, 00:17:45.061 "num_base_bdevs_discovered": 3, 00:17:45.061 "num_base_bdevs_operational": 3, 00:17:45.061 "base_bdevs_list": [ 00:17:45.061 { 00:17:45.061 "name": "spare", 00:17:45.061 "uuid": "6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:45.061 "is_configured": true, 00:17:45.061 "data_offset": 0, 00:17:45.061 "data_size": 65536 00:17:45.061 }, 00:17:45.061 { 00:17:45.061 "name": null, 00:17:45.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.061 "is_configured": false, 00:17:45.061 "data_offset": 0, 00:17:45.061 "data_size": 65536 00:17:45.061 }, 00:17:45.061 { 00:17:45.061 "name": "BaseBdev3", 00:17:45.061 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:45.061 "is_configured": true, 00:17:45.061 "data_offset": 0, 
00:17:45.061 "data_size": 65536 00:17:45.061 }, 00:17:45.061 { 00:17:45.061 "name": "BaseBdev4", 00:17:45.061 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:45.061 "is_configured": true, 00:17:45.061 "data_offset": 0, 00:17:45.061 "data_size": 65536 00:17:45.061 } 00:17:45.061 ] 00:17:45.061 }' 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.061 08:49:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.061 "name": "raid_bdev1", 00:17:45.061 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:45.061 "strip_size_kb": 0, 00:17:45.061 "state": "online", 00:17:45.061 "raid_level": "raid1", 00:17:45.061 "superblock": false, 00:17:45.061 "num_base_bdevs": 4, 00:17:45.061 "num_base_bdevs_discovered": 3, 00:17:45.061 "num_base_bdevs_operational": 3, 00:17:45.061 "base_bdevs_list": [ 00:17:45.061 { 00:17:45.061 "name": "spare", 00:17:45.061 "uuid": "6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:45.061 "is_configured": true, 00:17:45.061 "data_offset": 0, 00:17:45.061 "data_size": 65536 00:17:45.061 }, 00:17:45.061 { 00:17:45.061 "name": null, 00:17:45.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.061 "is_configured": false, 00:17:45.061 "data_offset": 0, 00:17:45.061 "data_size": 65536 00:17:45.061 }, 00:17:45.061 { 00:17:45.061 "name": "BaseBdev3", 00:17:45.061 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:45.061 "is_configured": true, 00:17:45.061 "data_offset": 0, 00:17:45.061 "data_size": 65536 00:17:45.061 }, 00:17:45.061 { 00:17:45.061 "name": "BaseBdev4", 00:17:45.061 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:45.061 "is_configured": true, 00:17:45.061 "data_offset": 0, 00:17:45.061 "data_size": 65536 00:17:45.061 } 00:17:45.061 ] 00:17:45.061 }' 00:17:45.061 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:45.320 08:49:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.320 "name": "raid_bdev1", 00:17:45.320 "uuid": "d6c94dbd-8248-4fc8-ae8a-4741efa33a06", 00:17:45.320 "strip_size_kb": 0, 00:17:45.320 "state": "online", 00:17:45.320 "raid_level": "raid1", 00:17:45.320 "superblock": false, 00:17:45.320 "num_base_bdevs": 4, 00:17:45.320 "num_base_bdevs_discovered": 3, 00:17:45.320 "num_base_bdevs_operational": 3, 00:17:45.320 "base_bdevs_list": [ 00:17:45.320 
{ 00:17:45.320 "name": "spare", 00:17:45.320 "uuid": "6a846449-ec2b-5dbe-8a6a-d18d518d7e8c", 00:17:45.320 "is_configured": true, 00:17:45.320 "data_offset": 0, 00:17:45.320 "data_size": 65536 00:17:45.320 }, 00:17:45.320 { 00:17:45.320 "name": null, 00:17:45.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.320 "is_configured": false, 00:17:45.320 "data_offset": 0, 00:17:45.320 "data_size": 65536 00:17:45.320 }, 00:17:45.320 { 00:17:45.320 "name": "BaseBdev3", 00:17:45.320 "uuid": "ecceed37-fdf4-5683-a459-27f667628b15", 00:17:45.320 "is_configured": true, 00:17:45.320 "data_offset": 0, 00:17:45.320 "data_size": 65536 00:17:45.320 }, 00:17:45.320 { 00:17:45.320 "name": "BaseBdev4", 00:17:45.320 "uuid": "0a810f9c-e182-5422-9e44-41d55da80687", 00:17:45.320 "is_configured": true, 00:17:45.320 "data_offset": 0, 00:17:45.320 "data_size": 65536 00:17:45.320 } 00:17:45.320 ] 00:17:45.320 }' 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.320 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.836 73.12 IOPS, 219.38 MiB/s [2024-11-27T08:49:42.596Z] 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.836 [2024-11-27 08:49:42.378256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.836 [2024-11-27 08:49:42.378298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.836 00:17:45.836 Latency(us) 00:17:45.836 [2024-11-27T08:49:42.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.836 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:45.836 
raid_bdev1 : 8.36 70.69 212.06 0.00 0.00 19037.22 303.48 121539.49 00:17:45.836 [2024-11-27T08:49:42.596Z] =================================================================================================================== 00:17:45.836 [2024-11-27T08:49:42.596Z] Total : 70.69 212.06 0.00 0.00 19037.22 303.48 121539.49 00:17:45.836 [2024-11-27 08:49:42.426145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.836 [2024-11-27 08:49:42.426448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.836 { 00:17:45.836 "results": [ 00:17:45.836 { 00:17:45.836 "job": "raid_bdev1", 00:17:45.836 "core_mask": "0x1", 00:17:45.836 "workload": "randrw", 00:17:45.836 "percentage": 50, 00:17:45.836 "status": "finished", 00:17:45.836 "queue_depth": 2, 00:17:45.836 "io_size": 3145728, 00:17:45.836 "runtime": 8.360814, 00:17:45.836 "iops": 70.68689723273356, 00:17:45.836 "mibps": 212.06069169820069, 00:17:45.836 "io_failed": 0, 00:17:45.836 "io_timeout": 0, 00:17:45.836 "avg_latency_us": 19037.21849561606, 00:17:45.836 "min_latency_us": 303.47636363636366, 00:17:45.836 "max_latency_us": 121539.4909090909 00:17:45.836 } 00:17:45.836 ], 00:17:45.836 "core_count": 1 00:17:45.836 } 00:17:45.836 [2024-11-27 08:49:42.426818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.836 [2024-11-27 08:49:42.427039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.836 08:49:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.836 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:46.094 /dev/nbd0 00:17:46.094 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:46.094 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:46.094 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:17:46.094 08:49:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@870 -- # local i 00:17:46.094 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:46.094 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:46.094 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # break 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.352 1+0 records in 00:17:46.352 1+0 records out 00:17:46.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329328 s, 12.4 MB/s 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # size=4096 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # return 0 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' 
']' 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.352 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:46.610 /dev/nbd1 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local i 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:46.610 
08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # break 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.610 1+0 records in 00:17:46.610 1+0 records out 00:17:46.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322942 s, 12.7 MB/s 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # size=4096 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # return 0 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.610 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:46.868 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:46.868 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:17:46.868 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:46.868 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:46.868 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:46.868 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.868 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:47.124 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:47.124 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:47.125 08:49:43 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.125 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:47.382 /dev/nbd1 00:17:47.383 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:47.383 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:47.383 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:17:47.383 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local i 00:17:47.383 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:17:47.383 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:17:47.383 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:17:47.383 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # break 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:17:47.383 1+0 records in 00:17:47.383 1+0 records out 00:17:47.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378612 s, 10.8 MB/s 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # size=4096 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # return 0 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.383 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:47.950 08:49:44 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79209 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' -z 79209 ']' 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # kill -0 79209 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # uname 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:17:47.950 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 79209 00:17:48.210 killing process with pid 79209 00:17:48.210 Received shutdown signal, test time was about 10.686315 seconds 00:17:48.210 00:17:48.210 Latency(us) 00:17:48.210 [2024-11-27T08:49:44.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.210 [2024-11-27T08:49:44.970Z] =================================================================================================================== 00:17:48.210 [2024-11-27T08:49:44.970Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.210 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:17:48.210 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:17:48.210 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # echo 'killing process with pid 79209' 00:17:48.210 08:49:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@970 -- # kill 79209 00:17:48.210 [2024-11-27 08:49:44.731652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.210 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@975 -- # wait 79209 00:17:48.468 [2024-11-27 08:49:45.129454] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:49.870 00:17:49.870 real 0m14.370s 00:17:49.870 user 0m18.729s 00:17:49.870 sys 0m1.886s 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # xtrace_disable 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.870 ************************************ 00:17:49.870 END TEST raid_rebuild_test_io 00:17:49.870 ************************************ 00:17:49.870 08:49:46 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:49.870 08:49:46 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:17:49.870 08:49:46 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:17:49.870 08:49:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.870 ************************************ 00:17:49.870 START TEST raid_rebuild_test_sb_io 00:17:49.870 ************************************ 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 4 true true true 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 
00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:49.870 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 
-- # local strip_size 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79624 00:17:49.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79624 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # '[' -z 79624 ']' 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local max_retries=100 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@841 -- # xtrace_disable 00:17:49.871 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.871 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:49.871 Zero copy mechanism will not be used. 00:17:49.871 [2024-11-27 08:49:46.476905] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:17:49.871 [2024-11-27 08:49:46.477098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79624 ] 00:17:50.128 [2024-11-27 08:49:46.664986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.128 [2024-11-27 08:49:46.805772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.386 [2024-11-27 08:49:47.029750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.386 [2024-11-27 08:49:47.029794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # return 0 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 BaseBdev1_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 [2024-11-27 08:49:47.498661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:50.952 [2024-11-27 08:49:47.498768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.952 [2024-11-27 08:49:47.498801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:50.952 [2024-11-27 08:49:47.498819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.952 [2024-11-27 08:49:47.501851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.952 [2024-11-27 08:49:47.502152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:50.952 BaseBdev1 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 BaseBdev2_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 [2024-11-27 08:49:47.555144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:50.952 [2024-11-27 08:49:47.555226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.952 [2024-11-27 08:49:47.555252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:50.952 [2024-11-27 08:49:47.555271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.952 [2024-11-27 08:49:47.558323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.952 [2024-11-27 08:49:47.558413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:50.952 BaseBdev2 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 BaseBdev3_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 [2024-11-27 08:49:47.620525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:50.952 [2024-11-27 08:49:47.620619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.952 [2024-11-27 08:49:47.620653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:50.952 [2024-11-27 08:49:47.620672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.952 [2024-11-27 08:49:47.623874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.952 [2024-11-27 08:49:47.623927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:50.952 BaseBdev3 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 BaseBdev4_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.952 [2024-11-27 08:49:47.681521] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:17:50.952 [2024-11-27 08:49:47.681620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.952 [2024-11-27 08:49:47.681656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:50.952 [2024-11-27 08:49:47.681676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.952 [2024-11-27 08:49:47.684903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.952 [2024-11-27 08:49:47.684958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:50.952 BaseBdev4 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.952 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.211 spare_malloc 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.211 spare_delay 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.211 [2024-11-27 08:49:47.750993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.211 [2024-11-27 08:49:47.751096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.211 [2024-11-27 08:49:47.751132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:51.211 [2024-11-27 08:49:47.751153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.211 [2024-11-27 08:49:47.754205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.211 [2024-11-27 08:49:47.754480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.211 spare 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.211 [2024-11-27 08:49:47.763206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.211 [2024-11-27 08:49:47.765850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.211 [2024-11-27 08:49:47.766134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.211 [2024-11-27 08:49:47.766232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.211 [2024-11-27 08:49:47.766535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:17:51.211 [2024-11-27 08:49:47.766566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:51.211 [2024-11-27 08:49:47.766938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:51.211 [2024-11-27 08:49:47.767203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:51.211 [2024-11-27 08:49:47.767222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:51.211 [2024-11-27 08:49:47.767484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.211 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.212 "name": "raid_bdev1", 00:17:51.212 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:51.212 "strip_size_kb": 0, 00:17:51.212 "state": "online", 00:17:51.212 "raid_level": "raid1", 00:17:51.212 "superblock": true, 00:17:51.212 "num_base_bdevs": 4, 00:17:51.212 "num_base_bdevs_discovered": 4, 00:17:51.212 "num_base_bdevs_operational": 4, 00:17:51.212 "base_bdevs_list": [ 00:17:51.212 { 00:17:51.212 "name": "BaseBdev1", 00:17:51.212 "uuid": "fd70656b-bd68-5f88-b661-b7768e8e3eab", 00:17:51.212 "is_configured": true, 00:17:51.212 "data_offset": 2048, 00:17:51.212 "data_size": 63488 00:17:51.212 }, 00:17:51.212 { 00:17:51.212 "name": "BaseBdev2", 00:17:51.212 "uuid": "de0187e7-d0e6-5a84-a27a-95cd66d132b3", 00:17:51.212 "is_configured": true, 00:17:51.212 "data_offset": 2048, 00:17:51.212 "data_size": 63488 00:17:51.212 }, 00:17:51.212 { 00:17:51.212 "name": "BaseBdev3", 00:17:51.212 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:51.212 "is_configured": true, 00:17:51.212 "data_offset": 2048, 00:17:51.212 "data_size": 63488 00:17:51.212 }, 00:17:51.212 { 00:17:51.212 "name": "BaseBdev4", 00:17:51.212 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:51.212 "is_configured": true, 00:17:51.212 "data_offset": 2048, 00:17:51.212 "data_size": 63488 00:17:51.212 } 00:17:51.212 ] 00:17:51.212 }' 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:51.212 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 [2024-11-27 08:49:48.312094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:51.779 08:49:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.779 [2024-11-27 08:49:48.415597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.779 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.780 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.780 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:51.780 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.780 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.780 "name": "raid_bdev1", 00:17:51.780 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:51.780 "strip_size_kb": 0, 00:17:51.780 "state": "online", 00:17:51.780 "raid_level": "raid1", 00:17:51.780 "superblock": true, 00:17:51.780 "num_base_bdevs": 4, 00:17:51.780 "num_base_bdevs_discovered": 3, 00:17:51.780 "num_base_bdevs_operational": 3, 00:17:51.780 "base_bdevs_list": [ 00:17:51.780 { 00:17:51.780 "name": null, 00:17:51.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.780 "is_configured": false, 00:17:51.780 "data_offset": 0, 00:17:51.780 "data_size": 63488 00:17:51.780 }, 00:17:51.780 { 00:17:51.780 "name": "BaseBdev2", 00:17:51.780 "uuid": "de0187e7-d0e6-5a84-a27a-95cd66d132b3", 00:17:51.780 "is_configured": true, 00:17:51.780 "data_offset": 2048, 00:17:51.780 "data_size": 63488 00:17:51.780 }, 00:17:51.780 { 00:17:51.780 "name": "BaseBdev3", 00:17:51.780 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:51.780 "is_configured": true, 00:17:51.780 "data_offset": 2048, 00:17:51.780 "data_size": 63488 00:17:51.780 }, 00:17:51.780 { 00:17:51.780 "name": "BaseBdev4", 00:17:51.780 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:51.780 "is_configured": true, 00:17:51.780 "data_offset": 2048, 00:17:51.780 "data_size": 63488 00:17:51.780 } 00:17:51.780 ] 00:17:51.780 }' 00:17:51.780 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.780 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.038 [2024-11-27 08:49:48.544610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:52.038 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:52.038 Zero copy mechanism will not be used. 
00:17:52.038 Running I/O for 60 seconds... 00:17:52.296 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:52.296 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.296 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.296 [2024-11-27 08:49:48.957696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.296 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.296 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:52.567 [2024-11-27 08:49:49.060076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:52.567 [2024-11-27 08:49:49.063206] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.567 [2024-11-27 08:49:49.195400] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:52.567 [2024-11-27 08:49:49.196538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:52.826 [2024-11-27 08:49:49.400993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:52.826 [2024-11-27 08:49:49.402138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:53.084 139.00 IOPS, 417.00 MiB/s [2024-11-27T08:49:49.844Z] [2024-11-27 08:49:49.744036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:53.084 [2024-11-27 08:49:49.746702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:53.342 
[2024-11-27 08:49:50.009977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.342 [2024-11-27 08:49:50.011387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.342 "name": "raid_bdev1", 00:17:53.342 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:53.342 "strip_size_kb": 0, 00:17:53.342 "state": "online", 00:17:53.342 "raid_level": "raid1", 00:17:53.342 "superblock": true, 00:17:53.342 "num_base_bdevs": 4, 00:17:53.342 "num_base_bdevs_discovered": 4, 00:17:53.342 "num_base_bdevs_operational": 4, 00:17:53.342 "process": { 00:17:53.342 "type": "rebuild", 00:17:53.342 "target": 
"spare", 00:17:53.342 "progress": { 00:17:53.342 "blocks": 10240, 00:17:53.342 "percent": 16 00:17:53.342 } 00:17:53.342 }, 00:17:53.342 "base_bdevs_list": [ 00:17:53.342 { 00:17:53.342 "name": "spare", 00:17:53.342 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:17:53.342 "is_configured": true, 00:17:53.342 "data_offset": 2048, 00:17:53.342 "data_size": 63488 00:17:53.342 }, 00:17:53.342 { 00:17:53.342 "name": "BaseBdev2", 00:17:53.342 "uuid": "de0187e7-d0e6-5a84-a27a-95cd66d132b3", 00:17:53.342 "is_configured": true, 00:17:53.342 "data_offset": 2048, 00:17:53.342 "data_size": 63488 00:17:53.342 }, 00:17:53.342 { 00:17:53.342 "name": "BaseBdev3", 00:17:53.342 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:53.342 "is_configured": true, 00:17:53.342 "data_offset": 2048, 00:17:53.342 "data_size": 63488 00:17:53.342 }, 00:17:53.342 { 00:17:53.342 "name": "BaseBdev4", 00:17:53.342 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:53.342 "is_configured": true, 00:17:53.342 "data_offset": 2048, 00:17:53.342 "data_size": 63488 00:17:53.342 } 00:17:53.342 ] 00:17:53.342 }' 00:17:53.342 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.601 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.601 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.601 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.601 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:53.601 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.601 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.601 [2024-11-27 08:49:50.171107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:17:53.859 [2024-11-27 08:49:50.358753] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:53.859 [2024-11-27 08:49:50.375954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.859 [2024-11-27 08:49:50.376026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.859 [2024-11-27 08:49:50.376055] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:53.859 [2024-11-27 08:49:50.419181] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.859 "name": "raid_bdev1", 00:17:53.859 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:53.859 "strip_size_kb": 0, 00:17:53.859 "state": "online", 00:17:53.859 "raid_level": "raid1", 00:17:53.859 "superblock": true, 00:17:53.859 "num_base_bdevs": 4, 00:17:53.859 "num_base_bdevs_discovered": 3, 00:17:53.859 "num_base_bdevs_operational": 3, 00:17:53.859 "base_bdevs_list": [ 00:17:53.859 { 00:17:53.859 "name": null, 00:17:53.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.859 "is_configured": false, 00:17:53.859 "data_offset": 0, 00:17:53.859 "data_size": 63488 00:17:53.859 }, 00:17:53.859 { 00:17:53.859 "name": "BaseBdev2", 00:17:53.859 "uuid": "de0187e7-d0e6-5a84-a27a-95cd66d132b3", 00:17:53.859 "is_configured": true, 00:17:53.859 "data_offset": 2048, 00:17:53.859 "data_size": 63488 00:17:53.859 }, 00:17:53.859 { 00:17:53.859 "name": "BaseBdev3", 00:17:53.859 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:53.859 "is_configured": true, 00:17:53.859 "data_offset": 2048, 00:17:53.859 "data_size": 63488 00:17:53.859 }, 00:17:53.859 { 00:17:53.859 "name": "BaseBdev4", 00:17:53.859 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:53.859 "is_configured": true, 00:17:53.859 "data_offset": 2048, 00:17:53.859 "data_size": 63488 00:17:53.859 } 00:17:53.859 ] 00:17:53.859 }' 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:53.859 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.422 100.00 IOPS, 300.00 MiB/s [2024-11-27T08:49:51.182Z] 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.422 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.422 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.422 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.422 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.422 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.422 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.422 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.422 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.422 "name": "raid_bdev1", 00:17:54.422 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:54.422 "strip_size_kb": 0, 00:17:54.422 "state": "online", 00:17:54.422 "raid_level": "raid1", 00:17:54.422 "superblock": true, 00:17:54.422 "num_base_bdevs": 4, 00:17:54.422 "num_base_bdevs_discovered": 3, 00:17:54.422 "num_base_bdevs_operational": 3, 00:17:54.422 "base_bdevs_list": [ 00:17:54.422 { 00:17:54.422 "name": null, 00:17:54.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.422 "is_configured": false, 00:17:54.422 "data_offset": 0, 00:17:54.422 "data_size": 63488 
00:17:54.422 }, 00:17:54.422 { 00:17:54.422 "name": "BaseBdev2", 00:17:54.422 "uuid": "de0187e7-d0e6-5a84-a27a-95cd66d132b3", 00:17:54.422 "is_configured": true, 00:17:54.422 "data_offset": 2048, 00:17:54.422 "data_size": 63488 00:17:54.422 }, 00:17:54.422 { 00:17:54.422 "name": "BaseBdev3", 00:17:54.422 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:54.422 "is_configured": true, 00:17:54.422 "data_offset": 2048, 00:17:54.422 "data_size": 63488 00:17:54.422 }, 00:17:54.422 { 00:17:54.422 "name": "BaseBdev4", 00:17:54.422 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:54.422 "is_configured": true, 00:17:54.422 "data_offset": 2048, 00:17:54.422 "data_size": 63488 00:17:54.422 } 00:17:54.422 ] 00:17:54.422 }' 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.422 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.422 [2024-11-27 08:49:51.164949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.678 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.678 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:54.678 [2024-11-27 08:49:51.246376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:54.678 
[2024-11-27 08:49:51.249272] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.678 [2024-11-27 08:49:51.381942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:54.678 [2024-11-27 08:49:51.382940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:54.935 120.67 IOPS, 362.00 MiB/s [2024-11-27T08:49:51.695Z] [2024-11-27 08:49:51.588773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:54.936 [2024-11-27 08:49:51.589955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:55.501 [2024-11-27 08:49:51.954325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:17:55.501 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.759 "name": "raid_bdev1", 00:17:55.759 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:55.759 "strip_size_kb": 0, 00:17:55.759 "state": "online", 00:17:55.759 "raid_level": "raid1", 00:17:55.759 "superblock": true, 00:17:55.759 "num_base_bdevs": 4, 00:17:55.759 "num_base_bdevs_discovered": 4, 00:17:55.759 "num_base_bdevs_operational": 4, 00:17:55.759 "process": { 00:17:55.759 "type": "rebuild", 00:17:55.759 "target": "spare", 00:17:55.759 "progress": { 00:17:55.759 "blocks": 10240, 00:17:55.759 "percent": 16 00:17:55.759 } 00:17:55.759 }, 00:17:55.759 "base_bdevs_list": [ 00:17:55.759 { 00:17:55.759 "name": "spare", 00:17:55.759 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:17:55.759 "is_configured": true, 00:17:55.759 "data_offset": 2048, 00:17:55.759 "data_size": 63488 00:17:55.759 }, 00:17:55.759 { 00:17:55.759 "name": "BaseBdev2", 00:17:55.759 "uuid": "de0187e7-d0e6-5a84-a27a-95cd66d132b3", 00:17:55.759 "is_configured": true, 00:17:55.759 "data_offset": 2048, 00:17:55.759 "data_size": 63488 00:17:55.759 }, 00:17:55.759 { 00:17:55.759 "name": "BaseBdev3", 00:17:55.759 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:55.759 "is_configured": true, 00:17:55.759 "data_offset": 2048, 00:17:55.759 "data_size": 63488 00:17:55.759 }, 00:17:55.759 { 00:17:55.759 "name": "BaseBdev4", 00:17:55.759 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:55.759 "is_configured": true, 00:17:55.759 "data_offset": 2048, 00:17:55.759 "data_size": 63488 00:17:55.759 } 00:17:55.759 ] 00:17:55.759 }' 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:55.759 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.759 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.759 [2024-11-27 08:49:52.386097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:56.018 109.75 IOPS, 329.25 MiB/s [2024-11-27T08:49:52.778Z] [2024-11-27 08:49:52.594017] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:56.018 [2024-11-27 08:49:52.594124] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:56.018 08:49:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.018 "name": "raid_bdev1", 00:17:56.018 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:56.018 "strip_size_kb": 0, 00:17:56.018 "state": "online", 00:17:56.018 "raid_level": "raid1", 00:17:56.018 "superblock": true, 00:17:56.018 "num_base_bdevs": 4, 00:17:56.018 "num_base_bdevs_discovered": 3, 00:17:56.018 "num_base_bdevs_operational": 3, 00:17:56.018 "process": { 00:17:56.018 "type": "rebuild", 00:17:56.018 "target": "spare", 00:17:56.018 "progress": { 00:17:56.018 "blocks": 14336, 00:17:56.018 "percent": 22 00:17:56.018 } 00:17:56.018 }, 00:17:56.018 "base_bdevs_list": [ 00:17:56.018 { 00:17:56.018 "name": "spare", 00:17:56.018 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:17:56.018 "is_configured": true, 00:17:56.018 "data_offset": 2048, 
00:17:56.018 "data_size": 63488 00:17:56.018 }, 00:17:56.018 { 00:17:56.018 "name": null, 00:17:56.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.018 "is_configured": false, 00:17:56.018 "data_offset": 0, 00:17:56.018 "data_size": 63488 00:17:56.018 }, 00:17:56.018 { 00:17:56.018 "name": "BaseBdev3", 00:17:56.018 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:56.018 "is_configured": true, 00:17:56.018 "data_offset": 2048, 00:17:56.018 "data_size": 63488 00:17:56.018 }, 00:17:56.018 { 00:17:56.018 "name": "BaseBdev4", 00:17:56.018 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:56.018 "is_configured": true, 00:17:56.018 "data_offset": 2048, 00:17:56.018 "data_size": 63488 00:17:56.018 } 00:17:56.018 ] 00:17:56.018 }' 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=546 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.018 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.277 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.277 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.277 "name": "raid_bdev1", 00:17:56.277 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:56.277 "strip_size_kb": 0, 00:17:56.277 "state": "online", 00:17:56.277 "raid_level": "raid1", 00:17:56.277 "superblock": true, 00:17:56.277 "num_base_bdevs": 4, 00:17:56.277 "num_base_bdevs_discovered": 3, 00:17:56.277 "num_base_bdevs_operational": 3, 00:17:56.277 "process": { 00:17:56.277 "type": "rebuild", 00:17:56.277 "target": "spare", 00:17:56.277 "progress": { 00:17:56.277 "blocks": 16384, 00:17:56.277 "percent": 25 00:17:56.277 } 00:17:56.277 }, 00:17:56.277 "base_bdevs_list": [ 00:17:56.277 { 00:17:56.277 "name": "spare", 00:17:56.277 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:17:56.277 "is_configured": true, 00:17:56.277 "data_offset": 2048, 00:17:56.277 "data_size": 63488 00:17:56.277 }, 00:17:56.277 { 00:17:56.277 "name": null, 00:17:56.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.277 "is_configured": false, 00:17:56.277 "data_offset": 0, 00:17:56.277 "data_size": 63488 00:17:56.277 }, 00:17:56.277 { 00:17:56.277 "name": "BaseBdev3", 00:17:56.277 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:56.277 "is_configured": true, 00:17:56.277 "data_offset": 2048, 00:17:56.277 "data_size": 63488 00:17:56.277 }, 00:17:56.277 { 00:17:56.277 "name": "BaseBdev4", 
00:17:56.277 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:56.277 "is_configured": true, 00:17:56.277 "data_offset": 2048, 00:17:56.277 "data_size": 63488 00:17:56.277 } 00:17:56.277 ] 00:17:56.277 }' 00:17:56.277 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.277 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.277 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.277 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.277 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:56.277 [2024-11-27 08:49:52.995240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:56.536 [2024-11-27 08:49:53.210963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:56.794 [2024-11-27 08:49:53.450513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:57.052 100.80 IOPS, 302.40 MiB/s [2024-11-27T08:49:53.812Z] [2024-11-27 08:49:53.564150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:57.052 [2024-11-27 08:49:53.564678] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:57.311 [2024-11-27 08:49:53.891614] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.311 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.311 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.311 "name": "raid_bdev1", 00:17:57.311 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:57.311 "strip_size_kb": 0, 00:17:57.311 "state": "online", 00:17:57.311 "raid_level": "raid1", 00:17:57.311 "superblock": true, 00:17:57.311 "num_base_bdevs": 4, 00:17:57.311 "num_base_bdevs_discovered": 3, 00:17:57.311 "num_base_bdevs_operational": 3, 00:17:57.311 "process": { 00:17:57.311 "type": "rebuild", 00:17:57.311 "target": "spare", 00:17:57.311 "progress": { 00:17:57.311 "blocks": 32768, 00:17:57.311 "percent": 51 00:17:57.311 } 00:17:57.311 }, 00:17:57.311 "base_bdevs_list": [ 00:17:57.311 { 00:17:57.311 "name": "spare", 00:17:57.311 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:17:57.311 "is_configured": true, 00:17:57.311 "data_offset": 2048, 00:17:57.311 "data_size": 63488 00:17:57.311 }, 00:17:57.311 { 
00:17:57.311 "name": null, 00:17:57.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.311 "is_configured": false, 00:17:57.311 "data_offset": 0, 00:17:57.311 "data_size": 63488 00:17:57.311 }, 00:17:57.311 { 00:17:57.311 "name": "BaseBdev3", 00:17:57.311 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:57.311 "is_configured": true, 00:17:57.311 "data_offset": 2048, 00:17:57.311 "data_size": 63488 00:17:57.311 }, 00:17:57.311 { 00:17:57.311 "name": "BaseBdev4", 00:17:57.311 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:57.311 "is_configured": true, 00:17:57.311 "data_offset": 2048, 00:17:57.311 "data_size": 63488 00:17:57.311 } 00:17:57.311 ] 00:17:57.311 }' 00:17:57.311 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.311 [2024-11-27 08:49:54.013875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:57.311 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.311 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.569 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.569 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.569 [2024-11-27 08:49:54.261251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:57.826 [2024-11-27 08:49:54.527160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:58.391 92.00 IOPS, 276.00 MiB/s [2024-11-27T08:49:55.151Z] 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.391 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.650 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.650 "name": "raid_bdev1", 00:17:58.650 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:58.650 "strip_size_kb": 0, 00:17:58.650 "state": "online", 00:17:58.650 "raid_level": "raid1", 00:17:58.650 "superblock": true, 00:17:58.650 "num_base_bdevs": 4, 00:17:58.650 "num_base_bdevs_discovered": 3, 00:17:58.650 "num_base_bdevs_operational": 3, 00:17:58.650 "process": { 00:17:58.650 "type": "rebuild", 00:17:58.650 "target": "spare", 00:17:58.650 "progress": { 00:17:58.650 "blocks": 49152, 00:17:58.650 "percent": 77 00:17:58.650 } 00:17:58.650 }, 00:17:58.650 "base_bdevs_list": [ 00:17:58.650 { 00:17:58.650 "name": "spare", 00:17:58.650 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:17:58.650 "is_configured": true, 00:17:58.650 "data_offset": 2048, 00:17:58.650 "data_size": 63488 00:17:58.650 }, 00:17:58.650 { 
00:17:58.650 "name": null, 00:17:58.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.650 "is_configured": false, 00:17:58.650 "data_offset": 0, 00:17:58.650 "data_size": 63488 00:17:58.650 }, 00:17:58.650 { 00:17:58.650 "name": "BaseBdev3", 00:17:58.650 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:58.650 "is_configured": true, 00:17:58.650 "data_offset": 2048, 00:17:58.650 "data_size": 63488 00:17:58.650 }, 00:17:58.650 { 00:17:58.650 "name": "BaseBdev4", 00:17:58.650 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:58.650 "is_configured": true, 00:17:58.650 "data_offset": 2048, 00:17:58.650 "data_size": 63488 00:17:58.650 } 00:17:58.650 ] 00:17:58.650 }' 00:17:58.650 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.650 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.650 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.650 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.650 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.301 84.14 IOPS, 252.43 MiB/s [2024-11-27T08:49:56.061Z] [2024-11-27 08:49:55.884420] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:59.301 [2024-11-27 08:49:55.992389] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:59.301 [2024-11-27 08:49:55.999184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.560 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.818 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.818 "name": "raid_bdev1", 00:17:59.819 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:59.819 "strip_size_kb": 0, 00:17:59.819 "state": "online", 00:17:59.819 "raid_level": "raid1", 00:17:59.819 "superblock": true, 00:17:59.819 "num_base_bdevs": 4, 00:17:59.819 "num_base_bdevs_discovered": 3, 00:17:59.819 "num_base_bdevs_operational": 3, 00:17:59.819 "base_bdevs_list": [ 00:17:59.819 { 00:17:59.819 "name": "spare", 00:17:59.819 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:17:59.819 "is_configured": true, 00:17:59.819 "data_offset": 2048, 00:17:59.819 "data_size": 63488 00:17:59.819 }, 00:17:59.819 { 00:17:59.819 "name": null, 00:17:59.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.819 "is_configured": false, 00:17:59.819 "data_offset": 0, 00:17:59.819 "data_size": 63488 00:17:59.819 }, 00:17:59.819 { 00:17:59.819 "name": "BaseBdev3", 00:17:59.819 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 
00:17:59.819 "is_configured": true, 00:17:59.819 "data_offset": 2048, 00:17:59.819 "data_size": 63488 00:17:59.819 }, 00:17:59.819 { 00:17:59.819 "name": "BaseBdev4", 00:17:59.819 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:59.819 "is_configured": true, 00:17:59.819 "data_offset": 2048, 00:17:59.819 "data_size": 63488 00:17:59.819 } 00:17:59.819 ] 00:17:59.819 }' 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.819 08:49:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.819 "name": "raid_bdev1", 00:17:59.819 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:17:59.819 "strip_size_kb": 0, 00:17:59.819 "state": "online", 00:17:59.819 "raid_level": "raid1", 00:17:59.819 "superblock": true, 00:17:59.819 "num_base_bdevs": 4, 00:17:59.819 "num_base_bdevs_discovered": 3, 00:17:59.819 "num_base_bdevs_operational": 3, 00:17:59.819 "base_bdevs_list": [ 00:17:59.819 { 00:17:59.819 "name": "spare", 00:17:59.819 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:17:59.819 "is_configured": true, 00:17:59.819 "data_offset": 2048, 00:17:59.819 "data_size": 63488 00:17:59.819 }, 00:17:59.819 { 00:17:59.819 "name": null, 00:17:59.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.819 "is_configured": false, 00:17:59.819 "data_offset": 0, 00:17:59.819 "data_size": 63488 00:17:59.819 }, 00:17:59.819 { 00:17:59.819 "name": "BaseBdev3", 00:17:59.819 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:17:59.819 "is_configured": true, 00:17:59.819 "data_offset": 2048, 00:17:59.819 "data_size": 63488 00:17:59.819 }, 00:17:59.819 { 00:17:59.819 "name": "BaseBdev4", 00:17:59.819 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:17:59.819 "is_configured": true, 00:17:59.819 "data_offset": 2048, 00:17:59.819 "data_size": 63488 00:17:59.819 } 00:17:59.819 ] 00:17:59.819 }' 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.819 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.077 77.88 IOPS, 233.62 MiB/s [2024-11-27T08:49:56.837Z] 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.077 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.078 "name": "raid_bdev1", 00:18:00.078 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:00.078 "strip_size_kb": 0, 00:18:00.078 "state": "online", 00:18:00.078 
"raid_level": "raid1", 00:18:00.078 "superblock": true, 00:18:00.078 "num_base_bdevs": 4, 00:18:00.078 "num_base_bdevs_discovered": 3, 00:18:00.078 "num_base_bdevs_operational": 3, 00:18:00.078 "base_bdevs_list": [ 00:18:00.078 { 00:18:00.078 "name": "spare", 00:18:00.078 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:18:00.078 "is_configured": true, 00:18:00.078 "data_offset": 2048, 00:18:00.078 "data_size": 63488 00:18:00.078 }, 00:18:00.078 { 00:18:00.078 "name": null, 00:18:00.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.078 "is_configured": false, 00:18:00.078 "data_offset": 0, 00:18:00.078 "data_size": 63488 00:18:00.078 }, 00:18:00.078 { 00:18:00.078 "name": "BaseBdev3", 00:18:00.078 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:00.078 "is_configured": true, 00:18:00.078 "data_offset": 2048, 00:18:00.078 "data_size": 63488 00:18:00.078 }, 00:18:00.078 { 00:18:00.078 "name": "BaseBdev4", 00:18:00.078 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:00.078 "is_configured": true, 00:18:00.078 "data_offset": 2048, 00:18:00.078 "data_size": 63488 00:18:00.078 } 00:18:00.078 ] 00:18:00.078 }' 00:18:00.078 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.078 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.336 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.336 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.336 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.595 [2024-11-27 08:49:57.095214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.595 [2024-11-27 08:49:57.095285] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.595 00:18:00.595 Latency(us) 00:18:00.595 
[2024-11-27T08:49:57.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.595 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:00.595 raid_bdev1 : 8.57 74.67 224.00 0.00 0.00 17704.54 299.75 135361.63 00:18:00.595 [2024-11-27T08:49:57.355Z] =================================================================================================================== 00:18:00.595 [2024-11-27T08:49:57.355Z] Total : 74.67 224.00 0.00 0.00 17704.54 299.75 135361.63 00:18:00.595 { 00:18:00.595 "results": [ 00:18:00.595 { 00:18:00.595 "job": "raid_bdev1", 00:18:00.595 "core_mask": "0x1", 00:18:00.595 "workload": "randrw", 00:18:00.595 "percentage": 50, 00:18:00.595 "status": "finished", 00:18:00.595 "queue_depth": 2, 00:18:00.595 "io_size": 3145728, 00:18:00.595 "runtime": 8.571513, 00:18:00.595 "iops": 74.66593120724427, 00:18:00.595 "mibps": 223.9977936217328, 00:18:00.595 "io_failed": 0, 00:18:00.595 "io_timeout": 0, 00:18:00.595 "avg_latency_us": 17704.536727272727, 00:18:00.595 "min_latency_us": 299.75272727272727, 00:18:00.595 "max_latency_us": 135361.6290909091 00:18:00.595 } 00:18:00.595 ], 00:18:00.595 "core_count": 1 00:18:00.595 } 00:18:00.595 [2024-11-27 08:49:57.140784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.595 [2024-11-27 08:49:57.140886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.595 [2024-11-27 08:49:57.141052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.595 [2024-11-27 08:49:57.141077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.595 
08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.595 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:00.854 /dev/nbd0 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:00.854 
08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local i 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # break 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:18:00.854 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:00.855 1+0 records in 00:18:00.855 1+0 records out 00:18:00.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735464 s, 5.6 MB/s 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # size=4096 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # return 0 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:00.855 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:01.113 /dev/nbd1 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local i 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # break 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.113 1+0 records in 00:18:01.113 1+0 records out 00:18:01.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595509 s, 6.9 MB/s 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # size=4096 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # return 0 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.113 08:49:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.113 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:01.372 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:01.372 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.372 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:01.372 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:01.372 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:01.372 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.372 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:01.631 
08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.631 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:02.198 /dev/nbd1 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local i 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # break 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.198 1+0 records in 00:18:02.198 1+0 records out 00:18:02.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418344 s, 9.8 MB/s 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # size=4096 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # return 0 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.198 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:02.456 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.456 08:49:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.714 [2024-11-27 08:49:59.439969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:02.714 
[2024-11-27 08:49:59.440102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.714 [2024-11-27 08:49:59.440137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:02.714 [2024-11-27 08:49:59.440156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.714 [2024-11-27 08:49:59.443588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.714 [2024-11-27 08:49:59.443657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:02.714 [2024-11-27 08:49:59.443834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:02.714 [2024-11-27 08:49:59.443915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:02.714 [2024-11-27 08:49:59.444172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:02.714 [2024-11-27 08:49:59.444331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:02.714 spare 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.714 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.972 [2024-11-27 08:49:59.544504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:02.972 [2024-11-27 08:49:59.544603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:02.972 [2024-11-27 08:49:59.545124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:18:02.972 [2024-11-27 08:49:59.545470] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:02.972 [2024-11-27 08:49:59.545488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:02.972 [2024-11-27 08:49:59.545771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.972 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.973 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.973 08:49:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.973 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.973 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.973 "name": "raid_bdev1", 00:18:02.973 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:02.973 "strip_size_kb": 0, 00:18:02.973 "state": "online", 00:18:02.973 "raid_level": "raid1", 00:18:02.973 "superblock": true, 00:18:02.973 "num_base_bdevs": 4, 00:18:02.973 "num_base_bdevs_discovered": 3, 00:18:02.973 "num_base_bdevs_operational": 3, 00:18:02.973 "base_bdevs_list": [ 00:18:02.973 { 00:18:02.973 "name": "spare", 00:18:02.973 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:18:02.973 "is_configured": true, 00:18:02.973 "data_offset": 2048, 00:18:02.973 "data_size": 63488 00:18:02.973 }, 00:18:02.973 { 00:18:02.973 "name": null, 00:18:02.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.973 "is_configured": false, 00:18:02.973 "data_offset": 2048, 00:18:02.973 "data_size": 63488 00:18:02.973 }, 00:18:02.973 { 00:18:02.973 "name": "BaseBdev3", 00:18:02.973 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:02.973 "is_configured": true, 00:18:02.973 "data_offset": 2048, 00:18:02.973 "data_size": 63488 00:18:02.973 }, 00:18:02.973 { 00:18:02.973 "name": "BaseBdev4", 00:18:02.973 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:02.973 "is_configured": true, 00:18:02.973 "data_offset": 2048, 00:18:02.973 "data_size": 63488 00:18:02.973 } 00:18:02.973 ] 00:18:02.973 }' 00:18:02.973 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.973 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.539 "name": "raid_bdev1", 00:18:03.539 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:03.539 "strip_size_kb": 0, 00:18:03.539 "state": "online", 00:18:03.539 "raid_level": "raid1", 00:18:03.539 "superblock": true, 00:18:03.539 "num_base_bdevs": 4, 00:18:03.539 "num_base_bdevs_discovered": 3, 00:18:03.539 "num_base_bdevs_operational": 3, 00:18:03.539 "base_bdevs_list": [ 00:18:03.539 { 00:18:03.539 "name": "spare", 00:18:03.539 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:18:03.539 "is_configured": true, 00:18:03.539 "data_offset": 2048, 00:18:03.539 "data_size": 63488 00:18:03.539 }, 00:18:03.539 { 00:18:03.539 "name": null, 00:18:03.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.539 "is_configured": false, 00:18:03.539 "data_offset": 2048, 00:18:03.539 "data_size": 63488 00:18:03.539 }, 00:18:03.539 { 00:18:03.539 "name": "BaseBdev3", 00:18:03.539 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 
00:18:03.539 "is_configured": true, 00:18:03.539 "data_offset": 2048, 00:18:03.539 "data_size": 63488 00:18:03.539 }, 00:18:03.539 { 00:18:03.539 "name": "BaseBdev4", 00:18:03.539 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:03.539 "is_configured": true, 00:18:03.539 "data_offset": 2048, 00:18:03.539 "data_size": 63488 00:18:03.539 } 00:18:03.539 ] 00:18:03.539 }' 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:03.539 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.797 [2024-11-27 08:50:00.316428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.797 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.798 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.798 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.798 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.798 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.798 "name": "raid_bdev1", 00:18:03.798 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:03.798 "strip_size_kb": 0, 00:18:03.798 "state": 
"online", 00:18:03.798 "raid_level": "raid1", 00:18:03.798 "superblock": true, 00:18:03.798 "num_base_bdevs": 4, 00:18:03.798 "num_base_bdevs_discovered": 2, 00:18:03.798 "num_base_bdevs_operational": 2, 00:18:03.798 "base_bdevs_list": [ 00:18:03.798 { 00:18:03.798 "name": null, 00:18:03.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.798 "is_configured": false, 00:18:03.798 "data_offset": 0, 00:18:03.798 "data_size": 63488 00:18:03.798 }, 00:18:03.798 { 00:18:03.798 "name": null, 00:18:03.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.798 "is_configured": false, 00:18:03.798 "data_offset": 2048, 00:18:03.798 "data_size": 63488 00:18:03.798 }, 00:18:03.798 { 00:18:03.798 "name": "BaseBdev3", 00:18:03.798 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:03.798 "is_configured": true, 00:18:03.798 "data_offset": 2048, 00:18:03.798 "data_size": 63488 00:18:03.798 }, 00:18:03.798 { 00:18:03.798 "name": "BaseBdev4", 00:18:03.798 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:03.798 "is_configured": true, 00:18:03.798 "data_offset": 2048, 00:18:03.798 "data_size": 63488 00:18:03.798 } 00:18:03.798 ] 00:18:03.798 }' 00:18:03.798 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.798 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.056 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.056 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.056 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.056 [2024-11-27 08:50:00.812629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.056 [2024-11-27 08:50:00.812965] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:18:04.056 [2024-11-27 08:50:00.812995] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:04.056 [2024-11-27 08:50:00.813057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.402 [2024-11-27 08:50:00.827744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:18:04.402 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.402 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:04.402 [2024-11-27 08:50:00.830945] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.358 
"name": "raid_bdev1", 00:18:05.358 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:05.358 "strip_size_kb": 0, 00:18:05.358 "state": "online", 00:18:05.358 "raid_level": "raid1", 00:18:05.358 "superblock": true, 00:18:05.358 "num_base_bdevs": 4, 00:18:05.358 "num_base_bdevs_discovered": 3, 00:18:05.358 "num_base_bdevs_operational": 3, 00:18:05.358 "process": { 00:18:05.358 "type": "rebuild", 00:18:05.358 "target": "spare", 00:18:05.358 "progress": { 00:18:05.358 "blocks": 20480, 00:18:05.358 "percent": 32 00:18:05.358 } 00:18:05.358 }, 00:18:05.358 "base_bdevs_list": [ 00:18:05.358 { 00:18:05.358 "name": "spare", 00:18:05.358 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:18:05.358 "is_configured": true, 00:18:05.358 "data_offset": 2048, 00:18:05.358 "data_size": 63488 00:18:05.358 }, 00:18:05.358 { 00:18:05.358 "name": null, 00:18:05.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.358 "is_configured": false, 00:18:05.358 "data_offset": 2048, 00:18:05.358 "data_size": 63488 00:18:05.358 }, 00:18:05.358 { 00:18:05.358 "name": "BaseBdev3", 00:18:05.358 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:05.358 "is_configured": true, 00:18:05.358 "data_offset": 2048, 00:18:05.358 "data_size": 63488 00:18:05.358 }, 00:18:05.358 { 00:18:05.358 "name": "BaseBdev4", 00:18:05.358 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:05.358 "is_configured": true, 00:18:05.358 "data_offset": 2048, 00:18:05.358 "data_size": 63488 00:18:05.358 } 00:18:05.358 ] 00:18:05.358 }' 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.358 
08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 [2024-11-27 08:50:01.993252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.358 [2024-11-27 08:50:02.042820] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.358 [2024-11-27 08:50:02.042980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.358 [2024-11-27 08:50:02.043011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.358 [2024-11-27 08:50:02.043028] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.358 08:50:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.618 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.618 "name": "raid_bdev1", 00:18:05.618 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:05.618 "strip_size_kb": 0, 00:18:05.618 "state": "online", 00:18:05.618 "raid_level": "raid1", 00:18:05.618 "superblock": true, 00:18:05.618 "num_base_bdevs": 4, 00:18:05.618 "num_base_bdevs_discovered": 2, 00:18:05.618 "num_base_bdevs_operational": 2, 00:18:05.618 "base_bdevs_list": [ 00:18:05.618 { 00:18:05.618 "name": null, 00:18:05.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.618 "is_configured": false, 00:18:05.618 "data_offset": 0, 00:18:05.618 "data_size": 63488 00:18:05.618 }, 00:18:05.618 { 00:18:05.618 "name": null, 00:18:05.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.618 "is_configured": false, 00:18:05.618 "data_offset": 2048, 00:18:05.618 "data_size": 63488 00:18:05.618 }, 00:18:05.618 { 00:18:05.618 "name": "BaseBdev3", 00:18:05.618 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:05.618 "is_configured": true, 00:18:05.618 "data_offset": 2048, 00:18:05.618 "data_size": 63488 00:18:05.618 }, 00:18:05.618 { 00:18:05.618 "name": "BaseBdev4", 00:18:05.618 "uuid": 
"ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:05.618 "is_configured": true, 00:18:05.618 "data_offset": 2048, 00:18:05.618 "data_size": 63488 00:18:05.618 } 00:18:05.618 ] 00:18:05.618 }' 00:18:05.618 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.618 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.877 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.877 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.877 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.877 [2024-11-27 08:50:02.588195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.877 [2024-11-27 08:50:02.588329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.877 [2024-11-27 08:50:02.588408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:05.877 [2024-11-27 08:50:02.588429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.877 [2024-11-27 08:50:02.589130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.877 [2024-11-27 08:50:02.589171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.877 [2024-11-27 08:50:02.589312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:05.877 [2024-11-27 08:50:02.589357] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:05.877 [2024-11-27 08:50:02.589386] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:05.877 [2024-11-27 08:50:02.589438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.877 [2024-11-27 08:50:02.604270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:18:05.877 spare 00:18:05.877 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.877 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:05.877 [2024-11-27 08:50:02.607202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.250 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.251 "name": "raid_bdev1", 00:18:07.251 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:07.251 "strip_size_kb": 0, 00:18:07.251 
"state": "online", 00:18:07.251 "raid_level": "raid1", 00:18:07.251 "superblock": true, 00:18:07.251 "num_base_bdevs": 4, 00:18:07.251 "num_base_bdevs_discovered": 3, 00:18:07.251 "num_base_bdevs_operational": 3, 00:18:07.251 "process": { 00:18:07.251 "type": "rebuild", 00:18:07.251 "target": "spare", 00:18:07.251 "progress": { 00:18:07.251 "blocks": 20480, 00:18:07.251 "percent": 32 00:18:07.251 } 00:18:07.251 }, 00:18:07.251 "base_bdevs_list": [ 00:18:07.251 { 00:18:07.251 "name": "spare", 00:18:07.251 "uuid": "6461320f-2c86-5a41-9fb2-c624669234fd", 00:18:07.251 "is_configured": true, 00:18:07.251 "data_offset": 2048, 00:18:07.251 "data_size": 63488 00:18:07.251 }, 00:18:07.251 { 00:18:07.251 "name": null, 00:18:07.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.251 "is_configured": false, 00:18:07.251 "data_offset": 2048, 00:18:07.251 "data_size": 63488 00:18:07.251 }, 00:18:07.251 { 00:18:07.251 "name": "BaseBdev3", 00:18:07.251 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:07.251 "is_configured": true, 00:18:07.251 "data_offset": 2048, 00:18:07.251 "data_size": 63488 00:18:07.251 }, 00:18:07.251 { 00:18:07.251 "name": "BaseBdev4", 00:18:07.251 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:07.251 "is_configured": true, 00:18:07.251 "data_offset": 2048, 00:18:07.251 "data_size": 63488 00:18:07.251 } 00:18:07.251 ] 00:18:07.251 }' 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:07.251 08:50:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.251 [2024-11-27 08:50:03.781452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.251 [2024-11-27 08:50:03.819137] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.251 [2024-11-27 08:50:03.819557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.251 [2024-11-27 08:50:03.819711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.251 [2024-11-27 08:50:03.819765] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.251 08:50:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.251 "name": "raid_bdev1", 00:18:07.251 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:07.251 "strip_size_kb": 0, 00:18:07.251 "state": "online", 00:18:07.251 "raid_level": "raid1", 00:18:07.251 "superblock": true, 00:18:07.251 "num_base_bdevs": 4, 00:18:07.251 "num_base_bdevs_discovered": 2, 00:18:07.251 "num_base_bdevs_operational": 2, 00:18:07.251 "base_bdevs_list": [ 00:18:07.251 { 00:18:07.251 "name": null, 00:18:07.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.251 "is_configured": false, 00:18:07.251 "data_offset": 0, 00:18:07.251 "data_size": 63488 00:18:07.251 }, 00:18:07.251 { 00:18:07.251 "name": null, 00:18:07.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.251 "is_configured": false, 00:18:07.251 "data_offset": 2048, 00:18:07.251 "data_size": 63488 00:18:07.251 }, 00:18:07.251 { 00:18:07.251 "name": "BaseBdev3", 00:18:07.251 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:07.251 "is_configured": true, 00:18:07.251 "data_offset": 2048, 00:18:07.251 "data_size": 63488 00:18:07.251 }, 00:18:07.251 { 00:18:07.251 "name": "BaseBdev4", 00:18:07.251 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:07.251 "is_configured": true, 00:18:07.251 "data_offset": 2048, 00:18:07.251 
"data_size": 63488 00:18:07.251 } 00:18:07.251 ] 00:18:07.251 }' 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.251 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.817 "name": "raid_bdev1", 00:18:07.817 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:07.817 "strip_size_kb": 0, 00:18:07.817 "state": "online", 00:18:07.817 "raid_level": "raid1", 00:18:07.817 "superblock": true, 00:18:07.817 "num_base_bdevs": 4, 00:18:07.817 "num_base_bdevs_discovered": 2, 00:18:07.817 "num_base_bdevs_operational": 2, 00:18:07.817 "base_bdevs_list": [ 00:18:07.817 { 00:18:07.817 "name": null, 00:18:07.817 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:07.817 "is_configured": false, 00:18:07.817 "data_offset": 0, 00:18:07.817 "data_size": 63488 00:18:07.817 }, 00:18:07.817 { 00:18:07.817 "name": null, 00:18:07.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.817 "is_configured": false, 00:18:07.817 "data_offset": 2048, 00:18:07.817 "data_size": 63488 00:18:07.817 }, 00:18:07.817 { 00:18:07.817 "name": "BaseBdev3", 00:18:07.817 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:07.817 "is_configured": true, 00:18:07.817 "data_offset": 2048, 00:18:07.817 "data_size": 63488 00:18:07.817 }, 00:18:07.817 { 00:18:07.817 "name": "BaseBdev4", 00:18:07.817 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:07.817 "is_configured": true, 00:18:07.817 "data_offset": 2048, 00:18:07.817 "data_size": 63488 00:18:07.817 } 00:18:07.817 ] 00:18:07.817 }' 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.817 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.075 08:50:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.075 [2024-11-27 08:50:04.609093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:08.075 [2024-11-27 08:50:04.609209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.075 [2024-11-27 08:50:04.609256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:18:08.075 [2024-11-27 08:50:04.609273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.075 [2024-11-27 08:50:04.609980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.075 [2024-11-27 08:50:04.610033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:08.075 [2024-11-27 08:50:04.610167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:08.075 [2024-11-27 08:50:04.610195] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:08.075 [2024-11-27 08:50:04.610211] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.075 [2024-11-27 08:50:04.610227] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:08.075 BaseBdev1 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.075 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:09.010 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.010 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.010 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:09.010 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.010 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.011 "name": "raid_bdev1", 00:18:09.011 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:09.011 "strip_size_kb": 0, 00:18:09.011 "state": "online", 00:18:09.011 "raid_level": "raid1", 00:18:09.011 "superblock": true, 00:18:09.011 "num_base_bdevs": 4, 00:18:09.011 "num_base_bdevs_discovered": 2, 00:18:09.011 "num_base_bdevs_operational": 2, 00:18:09.011 "base_bdevs_list": [ 00:18:09.011 { 00:18:09.011 "name": null, 00:18:09.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.011 "is_configured": false, 00:18:09.011 
"data_offset": 0, 00:18:09.011 "data_size": 63488 00:18:09.011 }, 00:18:09.011 { 00:18:09.011 "name": null, 00:18:09.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.011 "is_configured": false, 00:18:09.011 "data_offset": 2048, 00:18:09.011 "data_size": 63488 00:18:09.011 }, 00:18:09.011 { 00:18:09.011 "name": "BaseBdev3", 00:18:09.011 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:09.011 "is_configured": true, 00:18:09.011 "data_offset": 2048, 00:18:09.011 "data_size": 63488 00:18:09.011 }, 00:18:09.011 { 00:18:09.011 "name": "BaseBdev4", 00:18:09.011 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:09.011 "is_configured": true, 00:18:09.011 "data_offset": 2048, 00:18:09.011 "data_size": 63488 00:18:09.011 } 00:18:09.011 ] 00:18:09.011 }' 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.011 08:50:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.577 "name": "raid_bdev1", 00:18:09.577 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:09.577 "strip_size_kb": 0, 00:18:09.577 "state": "online", 00:18:09.577 "raid_level": "raid1", 00:18:09.577 "superblock": true, 00:18:09.577 "num_base_bdevs": 4, 00:18:09.577 "num_base_bdevs_discovered": 2, 00:18:09.577 "num_base_bdevs_operational": 2, 00:18:09.577 "base_bdevs_list": [ 00:18:09.577 { 00:18:09.577 "name": null, 00:18:09.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.577 "is_configured": false, 00:18:09.577 "data_offset": 0, 00:18:09.577 "data_size": 63488 00:18:09.577 }, 00:18:09.577 { 00:18:09.577 "name": null, 00:18:09.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.577 "is_configured": false, 00:18:09.577 "data_offset": 2048, 00:18:09.577 "data_size": 63488 00:18:09.577 }, 00:18:09.577 { 00:18:09.577 "name": "BaseBdev3", 00:18:09.577 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:09.577 "is_configured": true, 00:18:09.577 "data_offset": 2048, 00:18:09.577 "data_size": 63488 00:18:09.577 }, 00:18:09.577 { 00:18:09.577 "name": "BaseBdev4", 00:18:09.577 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:09.577 "is_configured": true, 00:18:09.577 "data_offset": 2048, 00:18:09.577 "data_size": 63488 00:18:09.577 } 00:18:09.577 ] 00:18:09.577 }' 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.577 
08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:09.577 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.578 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.578 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.578 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.578 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.578 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.578 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.578 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.578 [2024-11-27 08:50:06.333889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.835 [2024-11-27 08:50:06.334368] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:09.835 [2024-11-27 08:50:06.334403] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:09.835 request: 00:18:09.835 { 00:18:09.835 "base_bdev": "BaseBdev1", 00:18:09.835 "raid_bdev": "raid_bdev1", 00:18:09.835 "method": "bdev_raid_add_base_bdev", 00:18:09.835 "req_id": 1 00:18:09.835 } 00:18:09.835 Got JSON-RPC error response 00:18:09.835 response: 00:18:09.835 { 00:18:09.835 "code": -22, 00:18:09.835 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:09.835 } 00:18:09.835 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.835 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:09.835 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.835 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.835 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.835 08:50:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.770 08:50:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.770 "name": "raid_bdev1", 00:18:10.770 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:10.770 "strip_size_kb": 0, 00:18:10.770 "state": "online", 00:18:10.770 "raid_level": "raid1", 00:18:10.770 "superblock": true, 00:18:10.770 "num_base_bdevs": 4, 00:18:10.770 "num_base_bdevs_discovered": 2, 00:18:10.770 "num_base_bdevs_operational": 2, 00:18:10.770 "base_bdevs_list": [ 00:18:10.770 { 00:18:10.770 "name": null, 00:18:10.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.770 "is_configured": false, 00:18:10.770 "data_offset": 0, 00:18:10.770 "data_size": 63488 00:18:10.770 }, 00:18:10.770 { 00:18:10.770 "name": null, 00:18:10.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.770 "is_configured": false, 00:18:10.770 "data_offset": 2048, 00:18:10.770 "data_size": 63488 00:18:10.770 }, 00:18:10.770 { 00:18:10.770 "name": "BaseBdev3", 00:18:10.770 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:10.770 "is_configured": true, 00:18:10.770 "data_offset": 2048, 00:18:10.770 "data_size": 63488 00:18:10.770 }, 00:18:10.770 { 00:18:10.770 "name": "BaseBdev4", 00:18:10.770 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:10.770 "is_configured": true, 00:18:10.770 "data_offset": 2048, 00:18:10.770 "data_size": 63488 00:18:10.770 } 00:18:10.770 ] 00:18:10.770 }' 00:18:10.770 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.770 08:50:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.335 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.336 "name": "raid_bdev1", 00:18:11.336 "uuid": "a79cec3b-a73f-4734-aacc-8d60e8c81956", 00:18:11.336 "strip_size_kb": 0, 00:18:11.336 "state": "online", 00:18:11.336 "raid_level": "raid1", 00:18:11.336 "superblock": true, 00:18:11.336 "num_base_bdevs": 4, 00:18:11.336 "num_base_bdevs_discovered": 2, 00:18:11.336 "num_base_bdevs_operational": 2, 00:18:11.336 "base_bdevs_list": [ 00:18:11.336 { 00:18:11.336 "name": null, 00:18:11.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.336 "is_configured": false, 00:18:11.336 "data_offset": 0, 00:18:11.336 "data_size": 63488 00:18:11.336 }, 00:18:11.336 { 00:18:11.336 "name": null, 00:18:11.336 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:11.336 "is_configured": false, 00:18:11.336 "data_offset": 2048, 00:18:11.336 "data_size": 63488 00:18:11.336 }, 00:18:11.336 { 00:18:11.336 "name": "BaseBdev3", 00:18:11.336 "uuid": "31000492-573c-5513-9367-50aa9e0e1c18", 00:18:11.336 "is_configured": true, 00:18:11.336 "data_offset": 2048, 00:18:11.336 "data_size": 63488 00:18:11.336 }, 00:18:11.336 { 00:18:11.336 "name": "BaseBdev4", 00:18:11.336 "uuid": "ff877d2d-516e-58c0-b38a-d94ca6eba695", 00:18:11.336 "is_configured": true, 00:18:11.336 "data_offset": 2048, 00:18:11.336 "data_size": 63488 00:18:11.336 } 00:18:11.336 ] 00:18:11.336 }' 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.336 08:50:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79624 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' -z 79624 ']' 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # kill -0 79624 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # uname 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 79624 00:18:11.336 killing process with pid 79624 00:18:11.336 Received shutdown signal, test time was about 19.525207 seconds 00:18:11.336 00:18:11.336 Latency(us) 00:18:11.336 [2024-11-27T08:50:08.096Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:18:11.336 [2024-11-27T08:50:08.096Z] =================================================================================================================== 00:18:11.336 [2024-11-27T08:50:08.096Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # echo 'killing process with pid 79624' 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # kill 79624 00:18:11.336 [2024-11-27 08:50:08.072784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.336 08:50:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@975 -- # wait 79624 00:18:11.336 [2024-11-27 08:50:08.072974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.336 [2024-11-27 08:50:08.073082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.336 [2024-11-27 08:50:08.073110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:11.901 [2024-11-27 08:50:08.483880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.275 08:50:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:13.275 00:18:13.275 real 0m23.314s 00:18:13.275 user 0m31.636s 00:18:13.275 sys 0m2.542s 00:18:13.275 08:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # xtrace_disable 00:18:13.275 08:50:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:13.275 ************************************ 00:18:13.275 END TEST raid_rebuild_test_sb_io 00:18:13.275 
************************************ 00:18:13.275 08:50:09 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:13.275 08:50:09 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:13.275 08:50:09 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:18:13.275 08:50:09 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:18:13.275 08:50:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.275 ************************************ 00:18:13.275 START TEST raid5f_state_function_test 00:18:13.275 ************************************ 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test raid5f 3 false 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.275 08:50:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:13.275 Process raid pid: 80363 00:18:13.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80363 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80363' 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80363 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 80363 ']' 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:13.275 08:50:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.275 [2024-11-27 08:50:09.840184] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:18:13.275 [2024-11-27 08:50:09.840388] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.275 [2024-11-27 08:50:10.021060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.533 [2024-11-27 08:50:10.169078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.791 [2024-11-27 08:50:10.395998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.791 [2024-11-27 08:50:10.396360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.357 [2024-11-27 08:50:10.878248] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.357 [2024-11-27 08:50:10.878325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.357 [2024-11-27 08:50:10.878369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.357 [2024-11-27 08:50:10.878389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.357 [2024-11-27 08:50:10.878400] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:14.357 [2024-11-27 08:50:10.878415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.357 "name": "Existed_Raid", 00:18:14.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.357 "strip_size_kb": 64, 00:18:14.357 "state": "configuring", 00:18:14.357 "raid_level": "raid5f", 00:18:14.357 "superblock": false, 00:18:14.357 "num_base_bdevs": 3, 00:18:14.357 "num_base_bdevs_discovered": 0, 00:18:14.357 "num_base_bdevs_operational": 3, 00:18:14.357 "base_bdevs_list": [ 00:18:14.357 { 00:18:14.357 "name": "BaseBdev1", 00:18:14.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.357 "is_configured": false, 00:18:14.357 "data_offset": 0, 00:18:14.357 "data_size": 0 00:18:14.357 }, 00:18:14.357 { 00:18:14.357 "name": "BaseBdev2", 00:18:14.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.357 "is_configured": false, 00:18:14.357 "data_offset": 0, 00:18:14.357 "data_size": 0 00:18:14.357 }, 00:18:14.357 { 00:18:14.357 "name": "BaseBdev3", 00:18:14.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.357 "is_configured": false, 00:18:14.357 "data_offset": 0, 00:18:14.357 "data_size": 0 00:18:14.357 } 00:18:14.357 ] 00:18:14.357 }' 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.357 08:50:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.924 [2024-11-27 08:50:11.462317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.924 [2024-11-27 08:50:11.462396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.924 [2024-11-27 08:50:11.470324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.924 [2024-11-27 08:50:11.470419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.924 [2024-11-27 08:50:11.470436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.924 [2024-11-27 08:50:11.470464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.924 [2024-11-27 08:50:11.470474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:14.924 [2024-11-27 08:50:11.470489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.924 [2024-11-27 08:50:11.523850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.924 BaseBdev1 00:18:14.924 08:50:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.924 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.924 [ 00:18:14.924 { 00:18:14.924 "name": "BaseBdev1", 00:18:14.924 "aliases": [ 00:18:14.924 "516cd28f-2957-4827-82a8-61de7c5b56d3" 00:18:14.924 ], 00:18:14.924 "product_name": "Malloc disk", 00:18:14.924 "block_size": 512, 00:18:14.924 "num_blocks": 65536, 00:18:14.924 "uuid": "516cd28f-2957-4827-82a8-61de7c5b56d3", 00:18:14.924 "assigned_rate_limits": { 00:18:14.924 "rw_ios_per_sec": 0, 00:18:14.924 
"rw_mbytes_per_sec": 0, 00:18:14.924 "r_mbytes_per_sec": 0, 00:18:14.924 "w_mbytes_per_sec": 0 00:18:14.924 }, 00:18:14.924 "claimed": true, 00:18:14.924 "claim_type": "exclusive_write", 00:18:14.924 "zoned": false, 00:18:14.924 "supported_io_types": { 00:18:14.924 "read": true, 00:18:14.924 "write": true, 00:18:14.924 "unmap": true, 00:18:14.924 "flush": true, 00:18:14.924 "reset": true, 00:18:14.924 "nvme_admin": false, 00:18:14.924 "nvme_io": false, 00:18:14.924 "nvme_io_md": false, 00:18:14.924 "write_zeroes": true, 00:18:14.924 "zcopy": true, 00:18:14.925 "get_zone_info": false, 00:18:14.925 "zone_management": false, 00:18:14.925 "zone_append": false, 00:18:14.925 "compare": false, 00:18:14.925 "compare_and_write": false, 00:18:14.925 "abort": true, 00:18:14.925 "seek_hole": false, 00:18:14.925 "seek_data": false, 00:18:14.925 "copy": true, 00:18:14.925 "nvme_iov_md": false 00:18:14.925 }, 00:18:14.925 "memory_domains": [ 00:18:14.925 { 00:18:14.925 "dma_device_id": "system", 00:18:14.925 "dma_device_type": 1 00:18:14.925 }, 00:18:14.925 { 00:18:14.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.925 "dma_device_type": 2 00:18:14.925 } 00:18:14.925 ], 00:18:14.925 "driver_specific": {} 00:18:14.925 } 00:18:14.925 ] 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.925 08:50:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.925 "name": "Existed_Raid", 00:18:14.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.925 "strip_size_kb": 64, 00:18:14.925 "state": "configuring", 00:18:14.925 "raid_level": "raid5f", 00:18:14.925 "superblock": false, 00:18:14.925 "num_base_bdevs": 3, 00:18:14.925 "num_base_bdevs_discovered": 1, 00:18:14.925 "num_base_bdevs_operational": 3, 00:18:14.925 "base_bdevs_list": [ 00:18:14.925 { 00:18:14.925 "name": "BaseBdev1", 00:18:14.925 "uuid": "516cd28f-2957-4827-82a8-61de7c5b56d3", 00:18:14.925 "is_configured": true, 00:18:14.925 "data_offset": 0, 00:18:14.925 "data_size": 65536 00:18:14.925 }, 00:18:14.925 { 00:18:14.925 "name": 
"BaseBdev2", 00:18:14.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.925 "is_configured": false, 00:18:14.925 "data_offset": 0, 00:18:14.925 "data_size": 0 00:18:14.925 }, 00:18:14.925 { 00:18:14.925 "name": "BaseBdev3", 00:18:14.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.925 "is_configured": false, 00:18:14.925 "data_offset": 0, 00:18:14.925 "data_size": 0 00:18:14.925 } 00:18:14.925 ] 00:18:14.925 }' 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.925 08:50:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.491 [2024-11-27 08:50:12.076072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.491 [2024-11-27 08:50:12.076156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.491 [2024-11-27 08:50:12.084165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.491 [2024-11-27 08:50:12.086862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:15.491 [2024-11-27 08:50:12.086925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.491 [2024-11-27 08:50:12.086943] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.491 [2024-11-27 08:50:12.086958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.491 "name": "Existed_Raid", 00:18:15.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.491 "strip_size_kb": 64, 00:18:15.491 "state": "configuring", 00:18:15.491 "raid_level": "raid5f", 00:18:15.491 "superblock": false, 00:18:15.491 "num_base_bdevs": 3, 00:18:15.491 "num_base_bdevs_discovered": 1, 00:18:15.491 "num_base_bdevs_operational": 3, 00:18:15.491 "base_bdevs_list": [ 00:18:15.491 { 00:18:15.491 "name": "BaseBdev1", 00:18:15.491 "uuid": "516cd28f-2957-4827-82a8-61de7c5b56d3", 00:18:15.491 "is_configured": true, 00:18:15.491 "data_offset": 0, 00:18:15.491 "data_size": 65536 00:18:15.491 }, 00:18:15.491 { 00:18:15.491 "name": "BaseBdev2", 00:18:15.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.491 "is_configured": false, 00:18:15.491 "data_offset": 0, 00:18:15.491 "data_size": 0 00:18:15.491 }, 00:18:15.491 { 00:18:15.491 "name": "BaseBdev3", 00:18:15.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.491 "is_configured": false, 00:18:15.491 "data_offset": 0, 00:18:15.491 "data_size": 0 00:18:15.491 } 00:18:15.491 ] 00:18:15.491 }' 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.491 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.058 08:50:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:16.058 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.058 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.058 [2024-11-27 08:50:12.670320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.059 BaseBdev2 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.059 [ 00:18:16.059 { 00:18:16.059 "name": "BaseBdev2", 00:18:16.059 "aliases": [ 00:18:16.059 "48fa0823-95c2-4a6e-83e6-6fc43a3e3848" 00:18:16.059 ], 00:18:16.059 "product_name": "Malloc disk", 00:18:16.059 "block_size": 512, 00:18:16.059 "num_blocks": 65536, 00:18:16.059 "uuid": "48fa0823-95c2-4a6e-83e6-6fc43a3e3848", 00:18:16.059 "assigned_rate_limits": { 00:18:16.059 "rw_ios_per_sec": 0, 00:18:16.059 "rw_mbytes_per_sec": 0, 00:18:16.059 "r_mbytes_per_sec": 0, 00:18:16.059 "w_mbytes_per_sec": 0 00:18:16.059 }, 00:18:16.059 "claimed": true, 00:18:16.059 "claim_type": "exclusive_write", 00:18:16.059 "zoned": false, 00:18:16.059 "supported_io_types": { 00:18:16.059 "read": true, 00:18:16.059 "write": true, 00:18:16.059 "unmap": true, 00:18:16.059 "flush": true, 00:18:16.059 "reset": true, 00:18:16.059 "nvme_admin": false, 00:18:16.059 "nvme_io": false, 00:18:16.059 "nvme_io_md": false, 00:18:16.059 "write_zeroes": true, 00:18:16.059 "zcopy": true, 00:18:16.059 "get_zone_info": false, 00:18:16.059 "zone_management": false, 00:18:16.059 "zone_append": false, 00:18:16.059 "compare": false, 00:18:16.059 "compare_and_write": false, 00:18:16.059 "abort": true, 00:18:16.059 "seek_hole": false, 00:18:16.059 "seek_data": false, 00:18:16.059 "copy": true, 00:18:16.059 "nvme_iov_md": false 00:18:16.059 }, 00:18:16.059 "memory_domains": [ 00:18:16.059 { 00:18:16.059 "dma_device_id": "system", 00:18:16.059 "dma_device_type": 1 00:18:16.059 }, 00:18:16.059 { 00:18:16.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.059 "dma_device_type": 2 00:18:16.059 } 00:18:16.059 ], 00:18:16.059 "driver_specific": {} 00:18:16.059 } 00:18:16.059 ] 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:16.059 "name": "Existed_Raid", 00:18:16.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.059 "strip_size_kb": 64, 00:18:16.059 "state": "configuring", 00:18:16.059 "raid_level": "raid5f", 00:18:16.059 "superblock": false, 00:18:16.059 "num_base_bdevs": 3, 00:18:16.059 "num_base_bdevs_discovered": 2, 00:18:16.059 "num_base_bdevs_operational": 3, 00:18:16.059 "base_bdevs_list": [ 00:18:16.059 { 00:18:16.059 "name": "BaseBdev1", 00:18:16.059 "uuid": "516cd28f-2957-4827-82a8-61de7c5b56d3", 00:18:16.059 "is_configured": true, 00:18:16.059 "data_offset": 0, 00:18:16.059 "data_size": 65536 00:18:16.059 }, 00:18:16.059 { 00:18:16.059 "name": "BaseBdev2", 00:18:16.059 "uuid": "48fa0823-95c2-4a6e-83e6-6fc43a3e3848", 00:18:16.059 "is_configured": true, 00:18:16.059 "data_offset": 0, 00:18:16.059 "data_size": 65536 00:18:16.059 }, 00:18:16.059 { 00:18:16.059 "name": "BaseBdev3", 00:18:16.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.059 "is_configured": false, 00:18:16.059 "data_offset": 0, 00:18:16.059 "data_size": 0 00:18:16.059 } 00:18:16.059 ] 00:18:16.059 }' 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.059 08:50:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.626 [2024-11-27 08:50:13.282235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.626 [2024-11-27 08:50:13.282415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:16.626 [2024-11-27 08:50:13.282445] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:16.626 [2024-11-27 08:50:13.283000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:16.626 [2024-11-27 08:50:13.288487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:16.626 [2024-11-27 08:50:13.288538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:16.626 [2024-11-27 08:50:13.288995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.626 BaseBdev3 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.626 [ 00:18:16.626 { 00:18:16.626 "name": "BaseBdev3", 00:18:16.626 "aliases": [ 00:18:16.626 "0e4c1273-a55b-48c7-a5bc-726aa8dcb147" 00:18:16.626 ], 00:18:16.626 "product_name": "Malloc disk", 00:18:16.626 "block_size": 512, 00:18:16.626 "num_blocks": 65536, 00:18:16.626 "uuid": "0e4c1273-a55b-48c7-a5bc-726aa8dcb147", 00:18:16.626 "assigned_rate_limits": { 00:18:16.626 "rw_ios_per_sec": 0, 00:18:16.626 "rw_mbytes_per_sec": 0, 00:18:16.626 "r_mbytes_per_sec": 0, 00:18:16.626 "w_mbytes_per_sec": 0 00:18:16.626 }, 00:18:16.626 "claimed": true, 00:18:16.626 "claim_type": "exclusive_write", 00:18:16.626 "zoned": false, 00:18:16.626 "supported_io_types": { 00:18:16.626 "read": true, 00:18:16.626 "write": true, 00:18:16.626 "unmap": true, 00:18:16.626 "flush": true, 00:18:16.626 "reset": true, 00:18:16.626 "nvme_admin": false, 00:18:16.626 "nvme_io": false, 00:18:16.626 "nvme_io_md": false, 00:18:16.626 "write_zeroes": true, 00:18:16.626 "zcopy": true, 00:18:16.626 "get_zone_info": false, 00:18:16.626 "zone_management": false, 00:18:16.626 "zone_append": false, 00:18:16.626 "compare": false, 00:18:16.626 "compare_and_write": false, 00:18:16.626 "abort": true, 00:18:16.626 "seek_hole": false, 00:18:16.626 "seek_data": false, 00:18:16.626 "copy": true, 00:18:16.626 "nvme_iov_md": false 00:18:16.626 }, 00:18:16.626 "memory_domains": [ 00:18:16.626 { 00:18:16.626 "dma_device_id": "system", 00:18:16.626 "dma_device_type": 1 00:18:16.626 }, 00:18:16.626 { 00:18:16.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.626 "dma_device_type": 2 00:18:16.626 } 00:18:16.626 ], 00:18:16.626 "driver_specific": {} 00:18:16.626 } 00:18:16.626 ] 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.626 08:50:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.626 "name": "Existed_Raid", 00:18:16.626 "uuid": "133550db-ef24-4b43-bcfb-c123f47eff85", 00:18:16.626 "strip_size_kb": 64, 00:18:16.626 "state": "online", 00:18:16.626 "raid_level": "raid5f", 00:18:16.626 "superblock": false, 00:18:16.626 "num_base_bdevs": 3, 00:18:16.626 "num_base_bdevs_discovered": 3, 00:18:16.626 "num_base_bdevs_operational": 3, 00:18:16.626 "base_bdevs_list": [ 00:18:16.626 { 00:18:16.626 "name": "BaseBdev1", 00:18:16.626 "uuid": "516cd28f-2957-4827-82a8-61de7c5b56d3", 00:18:16.626 "is_configured": true, 00:18:16.626 "data_offset": 0, 00:18:16.626 "data_size": 65536 00:18:16.626 }, 00:18:16.626 { 00:18:16.626 "name": "BaseBdev2", 00:18:16.626 "uuid": "48fa0823-95c2-4a6e-83e6-6fc43a3e3848", 00:18:16.626 "is_configured": true, 00:18:16.626 "data_offset": 0, 00:18:16.626 "data_size": 65536 00:18:16.626 }, 00:18:16.626 { 00:18:16.626 "name": "BaseBdev3", 00:18:16.626 "uuid": "0e4c1273-a55b-48c7-a5bc-726aa8dcb147", 00:18:16.626 "is_configured": true, 00:18:16.626 "data_offset": 0, 00:18:16.626 "data_size": 65536 00:18:16.626 } 00:18:16.626 ] 00:18:16.626 }' 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.626 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.194 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:17.194 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:17.194 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:17.194 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:17.194 08:50:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:17.194 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:17.194 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:17.195 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.195 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.195 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:17.195 [2024-11-27 08:50:13.851565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.195 08:50:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.195 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:17.195 "name": "Existed_Raid", 00:18:17.195 "aliases": [ 00:18:17.195 "133550db-ef24-4b43-bcfb-c123f47eff85" 00:18:17.195 ], 00:18:17.195 "product_name": "Raid Volume", 00:18:17.195 "block_size": 512, 00:18:17.195 "num_blocks": 131072, 00:18:17.195 "uuid": "133550db-ef24-4b43-bcfb-c123f47eff85", 00:18:17.195 "assigned_rate_limits": { 00:18:17.195 "rw_ios_per_sec": 0, 00:18:17.195 "rw_mbytes_per_sec": 0, 00:18:17.195 "r_mbytes_per_sec": 0, 00:18:17.195 "w_mbytes_per_sec": 0 00:18:17.195 }, 00:18:17.195 "claimed": false, 00:18:17.195 "zoned": false, 00:18:17.195 "supported_io_types": { 00:18:17.195 "read": true, 00:18:17.195 "write": true, 00:18:17.195 "unmap": false, 00:18:17.195 "flush": false, 00:18:17.195 "reset": true, 00:18:17.195 "nvme_admin": false, 00:18:17.195 "nvme_io": false, 00:18:17.195 "nvme_io_md": false, 00:18:17.195 "write_zeroes": true, 00:18:17.195 "zcopy": false, 00:18:17.195 "get_zone_info": false, 00:18:17.195 "zone_management": false, 00:18:17.195 "zone_append": false, 
00:18:17.195 "compare": false, 00:18:17.195 "compare_and_write": false, 00:18:17.195 "abort": false, 00:18:17.195 "seek_hole": false, 00:18:17.195 "seek_data": false, 00:18:17.195 "copy": false, 00:18:17.195 "nvme_iov_md": false 00:18:17.195 }, 00:18:17.195 "driver_specific": { 00:18:17.195 "raid": { 00:18:17.195 "uuid": "133550db-ef24-4b43-bcfb-c123f47eff85", 00:18:17.195 "strip_size_kb": 64, 00:18:17.195 "state": "online", 00:18:17.195 "raid_level": "raid5f", 00:18:17.195 "superblock": false, 00:18:17.195 "num_base_bdevs": 3, 00:18:17.195 "num_base_bdevs_discovered": 3, 00:18:17.195 "num_base_bdevs_operational": 3, 00:18:17.195 "base_bdevs_list": [ 00:18:17.195 { 00:18:17.195 "name": "BaseBdev1", 00:18:17.195 "uuid": "516cd28f-2957-4827-82a8-61de7c5b56d3", 00:18:17.195 "is_configured": true, 00:18:17.195 "data_offset": 0, 00:18:17.195 "data_size": 65536 00:18:17.195 }, 00:18:17.195 { 00:18:17.195 "name": "BaseBdev2", 00:18:17.195 "uuid": "48fa0823-95c2-4a6e-83e6-6fc43a3e3848", 00:18:17.195 "is_configured": true, 00:18:17.195 "data_offset": 0, 00:18:17.195 "data_size": 65536 00:18:17.195 }, 00:18:17.195 { 00:18:17.195 "name": "BaseBdev3", 00:18:17.195 "uuid": "0e4c1273-a55b-48c7-a5bc-726aa8dcb147", 00:18:17.195 "is_configured": true, 00:18:17.195 "data_offset": 0, 00:18:17.195 "data_size": 65536 00:18:17.195 } 00:18:17.195 ] 00:18:17.195 } 00:18:17.195 } 00:18:17.195 }' 00:18:17.195 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.195 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:17.195 BaseBdev2 00:18:17.195 BaseBdev3' 00:18:17.195 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.454 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:18:17.454 08:50:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.454 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.454 [2024-11-27 08:50:14.167418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:17.713 
08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.713 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.714 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.714 "name": "Existed_Raid", 00:18:17.714 "uuid": "133550db-ef24-4b43-bcfb-c123f47eff85", 00:18:17.714 "strip_size_kb": 64, 00:18:17.714 "state": 
"online", 00:18:17.714 "raid_level": "raid5f", 00:18:17.714 "superblock": false, 00:18:17.714 "num_base_bdevs": 3, 00:18:17.714 "num_base_bdevs_discovered": 2, 00:18:17.714 "num_base_bdevs_operational": 2, 00:18:17.714 "base_bdevs_list": [ 00:18:17.714 { 00:18:17.714 "name": null, 00:18:17.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.714 "is_configured": false, 00:18:17.714 "data_offset": 0, 00:18:17.714 "data_size": 65536 00:18:17.714 }, 00:18:17.714 { 00:18:17.714 "name": "BaseBdev2", 00:18:17.714 "uuid": "48fa0823-95c2-4a6e-83e6-6fc43a3e3848", 00:18:17.714 "is_configured": true, 00:18:17.714 "data_offset": 0, 00:18:17.714 "data_size": 65536 00:18:17.714 }, 00:18:17.714 { 00:18:17.714 "name": "BaseBdev3", 00:18:17.714 "uuid": "0e4c1273-a55b-48c7-a5bc-726aa8dcb147", 00:18:17.714 "is_configured": true, 00:18:17.714 "data_offset": 0, 00:18:17.714 "data_size": 65536 00:18:17.714 } 00:18:17.714 ] 00:18:17.714 }' 00:18:17.714 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.714 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.280 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:18.280 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.281 [2024-11-27 08:50:14.839266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:18.281 [2024-11-27 08:50:14.839443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.281 [2024-11-27 08:50:14.932582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.281 08:50:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.281 [2024-11-27 08:50:14.992694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:18.281 [2024-11-27 08:50:14.992815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.538 BaseBdev2 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:18.538 [ 00:18:18.538 { 00:18:18.538 "name": "BaseBdev2", 00:18:18.538 "aliases": [ 00:18:18.538 "dae5564e-95db-49c5-a1b0-132bf0ce7bf8" 00:18:18.538 ], 00:18:18.538 "product_name": "Malloc disk", 00:18:18.538 "block_size": 512, 00:18:18.538 "num_blocks": 65536, 00:18:18.538 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:18.538 "assigned_rate_limits": { 00:18:18.538 "rw_ios_per_sec": 0, 00:18:18.538 "rw_mbytes_per_sec": 0, 00:18:18.538 "r_mbytes_per_sec": 0, 00:18:18.538 "w_mbytes_per_sec": 0 00:18:18.538 }, 00:18:18.538 "claimed": false, 00:18:18.538 "zoned": false, 00:18:18.538 "supported_io_types": { 00:18:18.538 "read": true, 00:18:18.538 "write": true, 00:18:18.538 "unmap": true, 00:18:18.538 "flush": true, 00:18:18.538 "reset": true, 00:18:18.538 "nvme_admin": false, 00:18:18.538 "nvme_io": false, 00:18:18.538 "nvme_io_md": false, 00:18:18.538 "write_zeroes": true, 00:18:18.538 "zcopy": true, 00:18:18.538 "get_zone_info": false, 00:18:18.538 "zone_management": false, 00:18:18.538 "zone_append": false, 00:18:18.538 "compare": false, 00:18:18.538 "compare_and_write": false, 00:18:18.538 "abort": true, 00:18:18.538 "seek_hole": false, 00:18:18.538 "seek_data": false, 00:18:18.538 "copy": true, 00:18:18.538 "nvme_iov_md": false 00:18:18.538 }, 00:18:18.538 "memory_domains": [ 00:18:18.538 { 00:18:18.538 "dma_device_id": "system", 00:18:18.538 "dma_device_type": 1 00:18:18.538 }, 00:18:18.538 { 00:18:18.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.538 "dma_device_type": 2 00:18:18.538 } 00:18:18.538 ], 00:18:18.538 "driver_specific": {} 00:18:18.538 } 00:18:18.538 ] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.538 BaseBdev3 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.538 08:50:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.538 [ 00:18:18.538 { 00:18:18.538 "name": "BaseBdev3", 00:18:18.538 "aliases": [ 00:18:18.538 "d06b6c69-395d-4066-8d9a-e7203d7b39ab" 00:18:18.538 ], 00:18:18.538 "product_name": "Malloc disk", 00:18:18.538 "block_size": 512, 00:18:18.538 "num_blocks": 65536, 00:18:18.538 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:18.538 "assigned_rate_limits": { 00:18:18.538 "rw_ios_per_sec": 0, 00:18:18.538 "rw_mbytes_per_sec": 0, 00:18:18.538 "r_mbytes_per_sec": 0, 00:18:18.538 "w_mbytes_per_sec": 0 00:18:18.538 }, 00:18:18.538 "claimed": false, 00:18:18.538 "zoned": false, 00:18:18.538 "supported_io_types": { 00:18:18.538 "read": true, 00:18:18.538 "write": true, 00:18:18.538 "unmap": true, 00:18:18.538 "flush": true, 00:18:18.538 "reset": true, 00:18:18.538 "nvme_admin": false, 00:18:18.538 "nvme_io": false, 00:18:18.538 "nvme_io_md": false, 00:18:18.538 "write_zeroes": true, 00:18:18.538 "zcopy": true, 00:18:18.538 "get_zone_info": false, 00:18:18.538 "zone_management": false, 00:18:18.538 "zone_append": false, 00:18:18.538 "compare": false, 00:18:18.538 "compare_and_write": false, 00:18:18.538 "abort": true, 00:18:18.538 "seek_hole": false, 00:18:18.538 "seek_data": false, 00:18:18.538 "copy": true, 00:18:18.538 "nvme_iov_md": false 00:18:18.538 }, 00:18:18.538 "memory_domains": [ 00:18:18.538 { 00:18:18.538 "dma_device_id": "system", 00:18:18.795 "dma_device_type": 1 00:18:18.795 }, 00:18:18.795 { 00:18:18.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.795 "dma_device_type": 2 00:18:18.795 } 00:18:18.795 ], 00:18:18.795 "driver_specific": {} 00:18:18.795 } 00:18:18.795 ] 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.795 08:50:15 
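As an aside (illustration only, not part of the SPDK test suite): each base bdev above is created with `bdev_malloc_create 32 512`, and the descriptor printed by `bdev_get_bdevs` reports `"block_size": 512` and `"num_blocks": 65536`. A quick sketch confirming that these two numbers are consistent with the requested 32 MiB size:

```python
# Size sanity check for the malloc base bdevs created above with
# `bdev_malloc_create 32 512` (32 MiB capacity, 512-byte blocks).
# The `bdev_get_bdevs` descriptor in the log reports these values:
block_size = 512       # "block_size" from the descriptor
num_blocks = 65536     # "num_blocks" from the descriptor

# Total capacity in MiB: blocks * bytes-per-block / (1024 * 1024)
size_mib = block_size * num_blocks // (1024 * 1024)
print(size_mib)  # 32 -- matches the first argument to bdev_malloc_create
```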
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.795 [2024-11-27 08:50:15.302549] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.795 [2024-11-27 08:50:15.302614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.795 [2024-11-27 08:50:15.302663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.795 [2024-11-27 08:50:15.305251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.795 08:50:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.795 "name": "Existed_Raid", 00:18:18.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.795 "strip_size_kb": 64, 00:18:18.795 "state": "configuring", 00:18:18.795 "raid_level": "raid5f", 00:18:18.795 "superblock": false, 00:18:18.795 "num_base_bdevs": 3, 00:18:18.795 "num_base_bdevs_discovered": 2, 00:18:18.795 "num_base_bdevs_operational": 3, 00:18:18.795 "base_bdevs_list": [ 00:18:18.795 { 00:18:18.795 "name": "BaseBdev1", 00:18:18.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.795 "is_configured": false, 00:18:18.795 "data_offset": 0, 00:18:18.795 "data_size": 0 00:18:18.795 }, 00:18:18.795 { 00:18:18.795 "name": "BaseBdev2", 00:18:18.795 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:18.795 "is_configured": true, 00:18:18.795 "data_offset": 0, 00:18:18.795 "data_size": 65536 00:18:18.795 }, 00:18:18.795 { 00:18:18.795 "name": "BaseBdev3", 00:18:18.795 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:18.795 "is_configured": true, 
00:18:18.795 "data_offset": 0, 00:18:18.795 "data_size": 65536 00:18:18.795 } 00:18:18.795 ] 00:18:18.795 }' 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.795 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.361 [2024-11-27 08:50:15.838760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.361 08:50:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.361 "name": "Existed_Raid", 00:18:19.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.361 "strip_size_kb": 64, 00:18:19.361 "state": "configuring", 00:18:19.361 "raid_level": "raid5f", 00:18:19.361 "superblock": false, 00:18:19.361 "num_base_bdevs": 3, 00:18:19.361 "num_base_bdevs_discovered": 1, 00:18:19.361 "num_base_bdevs_operational": 3, 00:18:19.361 "base_bdevs_list": [ 00:18:19.361 { 00:18:19.361 "name": "BaseBdev1", 00:18:19.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.361 "is_configured": false, 00:18:19.361 "data_offset": 0, 00:18:19.361 "data_size": 0 00:18:19.361 }, 00:18:19.361 { 00:18:19.361 "name": null, 00:18:19.361 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:19.361 "is_configured": false, 00:18:19.361 "data_offset": 0, 00:18:19.361 "data_size": 65536 00:18:19.361 }, 00:18:19.361 { 00:18:19.361 "name": "BaseBdev3", 00:18:19.361 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:19.361 "is_configured": true, 00:18:19.361 "data_offset": 0, 00:18:19.361 "data_size": 65536 00:18:19.361 } 00:18:19.361 ] 00:18:19.361 }' 00:18:19.361 08:50:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.361 08:50:15 
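The harness inspects raid bdev state by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq` filters such as `.[0].base_bdevs_list[1].is_configured`. As an illustration only (the JSON below is a trimmed sample matching the `Existed_Raid` state shown above after `bdev_raid_remove_base_bdev BaseBdev2`), the same check can be sketched in Python:

```python
import json

# Trimmed sample of the `bdev_raid_get_bdevs all` output shown in the
# log after BaseBdev2 is removed: its slot keeps a null name and flips
# is_configured to false, and num_base_bdevs_discovered drops to 1.
raid_bdevs_json = """
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false},
      {"name": null, "is_configured": false},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
"""

raid_bdevs = json.loads(raid_bdevs_json)

# Equivalent of: jq '.[0].base_bdevs_list[1].is_configured'
slot1_configured = raid_bdevs[0]["base_bdevs_list"][1]["is_configured"]
print(slot1_configured)  # False -- the removed base bdev's slot
```

The array stays `configuring` rather than going `online` because an operational raid5f array here needs all 3 base bdevs configured; the test repeatedly removes and re-adds members to verify exactly this state machine.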
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.926 [2024-11-27 08:50:16.508268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.926 BaseBdev1 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:19.926 08:50:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.926 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.926 [ 00:18:19.926 { 00:18:19.926 "name": "BaseBdev1", 00:18:19.926 "aliases": [ 00:18:19.926 "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593" 00:18:19.926 ], 00:18:19.926 "product_name": "Malloc disk", 00:18:19.926 "block_size": 512, 00:18:19.926 "num_blocks": 65536, 00:18:19.926 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:19.926 "assigned_rate_limits": { 00:18:19.926 "rw_ios_per_sec": 0, 00:18:19.926 "rw_mbytes_per_sec": 0, 00:18:19.926 "r_mbytes_per_sec": 0, 00:18:19.926 "w_mbytes_per_sec": 0 00:18:19.926 }, 00:18:19.926 "claimed": true, 00:18:19.926 "claim_type": "exclusive_write", 00:18:19.926 "zoned": false, 00:18:19.926 "supported_io_types": { 00:18:19.926 "read": true, 00:18:19.926 "write": true, 00:18:19.926 "unmap": true, 00:18:19.926 "flush": true, 00:18:19.926 "reset": true, 00:18:19.926 "nvme_admin": false, 00:18:19.926 "nvme_io": false, 00:18:19.926 "nvme_io_md": false, 00:18:19.926 "write_zeroes": true, 00:18:19.926 "zcopy": true, 00:18:19.926 "get_zone_info": false, 00:18:19.926 "zone_management": false, 00:18:19.926 "zone_append": false, 00:18:19.926 
"compare": false, 00:18:19.926 "compare_and_write": false, 00:18:19.926 "abort": true, 00:18:19.927 "seek_hole": false, 00:18:19.927 "seek_data": false, 00:18:19.927 "copy": true, 00:18:19.927 "nvme_iov_md": false 00:18:19.927 }, 00:18:19.927 "memory_domains": [ 00:18:19.927 { 00:18:19.927 "dma_device_id": "system", 00:18:19.927 "dma_device_type": 1 00:18:19.927 }, 00:18:19.927 { 00:18:19.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.927 "dma_device_type": 2 00:18:19.927 } 00:18:19.927 ], 00:18:19.927 "driver_specific": {} 00:18:19.927 } 00:18:19.927 ] 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.927 08:50:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.927 "name": "Existed_Raid", 00:18:19.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.927 "strip_size_kb": 64, 00:18:19.927 "state": "configuring", 00:18:19.927 "raid_level": "raid5f", 00:18:19.927 "superblock": false, 00:18:19.927 "num_base_bdevs": 3, 00:18:19.927 "num_base_bdevs_discovered": 2, 00:18:19.927 "num_base_bdevs_operational": 3, 00:18:19.927 "base_bdevs_list": [ 00:18:19.927 { 00:18:19.927 "name": "BaseBdev1", 00:18:19.927 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:19.927 "is_configured": true, 00:18:19.927 "data_offset": 0, 00:18:19.927 "data_size": 65536 00:18:19.927 }, 00:18:19.927 { 00:18:19.927 "name": null, 00:18:19.927 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:19.927 "is_configured": false, 00:18:19.927 "data_offset": 0, 00:18:19.927 "data_size": 65536 00:18:19.927 }, 00:18:19.927 { 00:18:19.927 "name": "BaseBdev3", 00:18:19.927 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:19.927 "is_configured": true, 00:18:19.927 "data_offset": 0, 00:18:19.927 "data_size": 65536 00:18:19.927 } 00:18:19.927 ] 00:18:19.927 }' 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.927 08:50:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.494 08:50:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.494 [2024-11-27 08:50:17.140508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.494 08:50:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.494 "name": "Existed_Raid", 00:18:20.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.494 "strip_size_kb": 64, 00:18:20.494 "state": "configuring", 00:18:20.494 "raid_level": "raid5f", 00:18:20.494 "superblock": false, 00:18:20.494 "num_base_bdevs": 3, 00:18:20.494 "num_base_bdevs_discovered": 1, 00:18:20.494 "num_base_bdevs_operational": 3, 00:18:20.494 "base_bdevs_list": [ 00:18:20.494 { 00:18:20.494 "name": "BaseBdev1", 00:18:20.494 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:20.494 "is_configured": true, 00:18:20.494 "data_offset": 0, 00:18:20.494 "data_size": 65536 00:18:20.494 }, 00:18:20.494 { 00:18:20.494 "name": null, 00:18:20.494 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:20.494 "is_configured": false, 00:18:20.494 "data_offset": 0, 00:18:20.494 "data_size": 65536 00:18:20.494 }, 00:18:20.494 { 00:18:20.494 "name": null, 
00:18:20.494 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:20.494 "is_configured": false, 00:18:20.494 "data_offset": 0, 00:18:20.494 "data_size": 65536 00:18:20.494 } 00:18:20.494 ] 00:18:20.494 }' 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.494 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.060 [2024-11-27 08:50:17.756715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.060 08:50:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.060 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.318 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.318 "name": "Existed_Raid", 00:18:21.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.318 "strip_size_kb": 64, 00:18:21.318 "state": "configuring", 00:18:21.318 "raid_level": "raid5f", 00:18:21.318 "superblock": false, 00:18:21.318 "num_base_bdevs": 3, 00:18:21.318 "num_base_bdevs_discovered": 2, 00:18:21.318 "num_base_bdevs_operational": 3, 00:18:21.318 "base_bdevs_list": [ 00:18:21.318 { 
00:18:21.318 "name": "BaseBdev1", 00:18:21.318 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:21.318 "is_configured": true, 00:18:21.318 "data_offset": 0, 00:18:21.318 "data_size": 65536 00:18:21.318 }, 00:18:21.318 { 00:18:21.318 "name": null, 00:18:21.318 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:21.318 "is_configured": false, 00:18:21.318 "data_offset": 0, 00:18:21.318 "data_size": 65536 00:18:21.318 }, 00:18:21.318 { 00:18:21.318 "name": "BaseBdev3", 00:18:21.318 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:21.318 "is_configured": true, 00:18:21.318 "data_offset": 0, 00:18:21.318 "data_size": 65536 00:18:21.318 } 00:18:21.318 ] 00:18:21.318 }' 00:18:21.318 08:50:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.318 08:50:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.576 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.835 [2024-11-27 08:50:18.336890] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.835 "name": "Existed_Raid", 00:18:21.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.835 "strip_size_kb": 64, 00:18:21.835 "state": "configuring", 00:18:21.835 "raid_level": "raid5f", 00:18:21.835 "superblock": false, 00:18:21.835 "num_base_bdevs": 3, 00:18:21.835 "num_base_bdevs_discovered": 1, 00:18:21.835 "num_base_bdevs_operational": 3, 00:18:21.835 "base_bdevs_list": [ 00:18:21.835 { 00:18:21.835 "name": null, 00:18:21.835 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:21.835 "is_configured": false, 00:18:21.835 "data_offset": 0, 00:18:21.835 "data_size": 65536 00:18:21.835 }, 00:18:21.835 { 00:18:21.835 "name": null, 00:18:21.835 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:21.835 "is_configured": false, 00:18:21.835 "data_offset": 0, 00:18:21.835 "data_size": 65536 00:18:21.835 }, 00:18:21.835 { 00:18:21.835 "name": "BaseBdev3", 00:18:21.835 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:21.835 "is_configured": true, 00:18:21.835 "data_offset": 0, 00:18:21.835 "data_size": 65536 00:18:21.835 } 00:18:21.835 ] 00:18:21.835 }' 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.835 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.401 08:50:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.401 [2024-11-27 08:50:19.003261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.401 08:50:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.401 "name": "Existed_Raid", 00:18:22.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.401 "strip_size_kb": 64, 00:18:22.401 "state": "configuring", 00:18:22.401 "raid_level": "raid5f", 00:18:22.401 "superblock": false, 00:18:22.401 "num_base_bdevs": 3, 00:18:22.401 "num_base_bdevs_discovered": 2, 00:18:22.401 "num_base_bdevs_operational": 3, 00:18:22.401 "base_bdevs_list": [ 00:18:22.401 { 00:18:22.401 "name": null, 00:18:22.401 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:22.401 "is_configured": false, 00:18:22.401 "data_offset": 0, 00:18:22.401 "data_size": 65536 00:18:22.401 }, 00:18:22.401 { 00:18:22.401 "name": "BaseBdev2", 00:18:22.401 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:22.401 "is_configured": true, 00:18:22.401 "data_offset": 0, 00:18:22.401 "data_size": 65536 00:18:22.401 }, 00:18:22.401 { 00:18:22.401 "name": "BaseBdev3", 00:18:22.401 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:22.401 "is_configured": true, 00:18:22.401 "data_offset": 0, 00:18:22.401 "data_size": 65536 00:18:22.401 } 00:18:22.401 ] 00:18:22.401 }' 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.401 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.973 08:50:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.973 [2024-11-27 08:50:19.680294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:22.973 [2024-11-27 08:50:19.680394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:22.973 [2024-11-27 08:50:19.680413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:22.973 [2024-11-27 08:50:19.680752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:18:22.973 [2024-11-27 08:50:19.685793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:22.973 NewBaseBdev 00:18:22.973 [2024-11-27 08:50:19.685962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:22.973 [2024-11-27 08:50:19.686374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:18:22.973 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.974 08:50:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.974 [ 00:18:22.974 { 00:18:22.974 "name": "NewBaseBdev", 00:18:22.974 "aliases": [ 00:18:22.974 "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593" 00:18:22.974 ], 00:18:22.974 "product_name": "Malloc disk", 00:18:22.974 "block_size": 512, 00:18:22.974 "num_blocks": 65536, 00:18:22.974 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:22.974 "assigned_rate_limits": { 00:18:22.974 "rw_ios_per_sec": 0, 00:18:22.974 "rw_mbytes_per_sec": 0, 00:18:22.974 "r_mbytes_per_sec": 0, 00:18:22.974 "w_mbytes_per_sec": 0 00:18:22.974 }, 00:18:22.974 "claimed": true, 00:18:22.974 "claim_type": "exclusive_write", 00:18:22.974 "zoned": false, 00:18:22.974 "supported_io_types": { 00:18:22.974 "read": true, 00:18:22.974 "write": true, 00:18:22.974 "unmap": true, 00:18:22.974 "flush": true, 00:18:22.974 "reset": true, 00:18:22.974 "nvme_admin": false, 00:18:22.974 "nvme_io": false, 00:18:22.974 "nvme_io_md": false, 00:18:22.974 "write_zeroes": true, 00:18:22.974 "zcopy": true, 00:18:22.974 "get_zone_info": false, 00:18:22.974 "zone_management": false, 00:18:22.974 "zone_append": false, 00:18:22.974 "compare": false, 00:18:22.974 "compare_and_write": false, 00:18:22.974 "abort": true, 00:18:22.974 "seek_hole": false, 00:18:22.974 "seek_data": false, 00:18:22.974 "copy": true, 00:18:22.974 "nvme_iov_md": false 00:18:22.974 }, 00:18:22.974 "memory_domains": [ 00:18:22.974 { 00:18:22.974 "dma_device_id": "system", 00:18:22.974 "dma_device_type": 1 00:18:22.974 }, 00:18:22.974 { 00:18:22.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.974 "dma_device_type": 2 00:18:22.974 } 00:18:22.974 ], 00:18:22.974 "driver_specific": {} 00:18:22.974 } 00:18:22.974 ] 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:18:22.974 08:50:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.974 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.232 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.232 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.232 "name": "Existed_Raid", 00:18:23.232 "uuid": "0ec7277d-3ef9-4201-bb85-47bd2ec225d3", 00:18:23.232 "strip_size_kb": 64, 00:18:23.232 "state": "online", 
00:18:23.232 "raid_level": "raid5f", 00:18:23.232 "superblock": false, 00:18:23.232 "num_base_bdevs": 3, 00:18:23.232 "num_base_bdevs_discovered": 3, 00:18:23.232 "num_base_bdevs_operational": 3, 00:18:23.232 "base_bdevs_list": [ 00:18:23.232 { 00:18:23.232 "name": "NewBaseBdev", 00:18:23.232 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:23.232 "is_configured": true, 00:18:23.232 "data_offset": 0, 00:18:23.232 "data_size": 65536 00:18:23.232 }, 00:18:23.232 { 00:18:23.232 "name": "BaseBdev2", 00:18:23.232 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:23.232 "is_configured": true, 00:18:23.232 "data_offset": 0, 00:18:23.232 "data_size": 65536 00:18:23.232 }, 00:18:23.232 { 00:18:23.232 "name": "BaseBdev3", 00:18:23.232 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:23.232 "is_configured": true, 00:18:23.232 "data_offset": 0, 00:18:23.232 "data_size": 65536 00:18:23.232 } 00:18:23.232 ] 00:18:23.232 }' 00:18:23.232 08:50:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.232 08:50:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.798 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:23.799 08:50:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:23.799 [2024-11-27 08:50:20.256812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:23.799 "name": "Existed_Raid", 00:18:23.799 "aliases": [ 00:18:23.799 "0ec7277d-3ef9-4201-bb85-47bd2ec225d3" 00:18:23.799 ], 00:18:23.799 "product_name": "Raid Volume", 00:18:23.799 "block_size": 512, 00:18:23.799 "num_blocks": 131072, 00:18:23.799 "uuid": "0ec7277d-3ef9-4201-bb85-47bd2ec225d3", 00:18:23.799 "assigned_rate_limits": { 00:18:23.799 "rw_ios_per_sec": 0, 00:18:23.799 "rw_mbytes_per_sec": 0, 00:18:23.799 "r_mbytes_per_sec": 0, 00:18:23.799 "w_mbytes_per_sec": 0 00:18:23.799 }, 00:18:23.799 "claimed": false, 00:18:23.799 "zoned": false, 00:18:23.799 "supported_io_types": { 00:18:23.799 "read": true, 00:18:23.799 "write": true, 00:18:23.799 "unmap": false, 00:18:23.799 "flush": false, 00:18:23.799 "reset": true, 00:18:23.799 "nvme_admin": false, 00:18:23.799 "nvme_io": false, 00:18:23.799 "nvme_io_md": false, 00:18:23.799 "write_zeroes": true, 00:18:23.799 "zcopy": false, 00:18:23.799 "get_zone_info": false, 00:18:23.799 "zone_management": false, 00:18:23.799 "zone_append": false, 00:18:23.799 "compare": false, 00:18:23.799 "compare_and_write": false, 00:18:23.799 "abort": false, 00:18:23.799 "seek_hole": false, 00:18:23.799 "seek_data": false, 00:18:23.799 "copy": false, 00:18:23.799 "nvme_iov_md": false 00:18:23.799 }, 00:18:23.799 "driver_specific": { 00:18:23.799 "raid": { 00:18:23.799 "uuid": 
"0ec7277d-3ef9-4201-bb85-47bd2ec225d3", 00:18:23.799 "strip_size_kb": 64, 00:18:23.799 "state": "online", 00:18:23.799 "raid_level": "raid5f", 00:18:23.799 "superblock": false, 00:18:23.799 "num_base_bdevs": 3, 00:18:23.799 "num_base_bdevs_discovered": 3, 00:18:23.799 "num_base_bdevs_operational": 3, 00:18:23.799 "base_bdevs_list": [ 00:18:23.799 { 00:18:23.799 "name": "NewBaseBdev", 00:18:23.799 "uuid": "e4d5732a-6fba-4e7f-8a4f-c0e98b8d4593", 00:18:23.799 "is_configured": true, 00:18:23.799 "data_offset": 0, 00:18:23.799 "data_size": 65536 00:18:23.799 }, 00:18:23.799 { 00:18:23.799 "name": "BaseBdev2", 00:18:23.799 "uuid": "dae5564e-95db-49c5-a1b0-132bf0ce7bf8", 00:18:23.799 "is_configured": true, 00:18:23.799 "data_offset": 0, 00:18:23.799 "data_size": 65536 00:18:23.799 }, 00:18:23.799 { 00:18:23.799 "name": "BaseBdev3", 00:18:23.799 "uuid": "d06b6c69-395d-4066-8d9a-e7203d7b39ab", 00:18:23.799 "is_configured": true, 00:18:23.799 "data_offset": 0, 00:18:23.799 "data_size": 65536 00:18:23.799 } 00:18:23.799 ] 00:18:23.799 } 00:18:23.799 } 00:18:23.799 }' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:23.799 BaseBdev2 00:18:23.799 BaseBdev3' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.799 08:50:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.799 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.057 [2024-11-27 08:50:20.564667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.057 [2024-11-27 08:50:20.564709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.057 [2024-11-27 08:50:20.564834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.057 [2024-11-27 08:50:20.565281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.057 [2024-11-27 08:50:20.565312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80363 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 80363 ']' 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@955 -- # kill -0 80363 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # uname 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 80363 00:18:24.057 killing process with pid 80363 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 80363' 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # kill 80363 00:18:24.057 08:50:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@975 -- # wait 80363 00:18:24.057 [2024-11-27 08:50:20.603083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.315 [2024-11-27 08:50:20.891397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.251 ************************************ 00:18:25.251 END TEST raid5f_state_function_test 00:18:25.251 ************************************ 00:18:25.251 08:50:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:25.251 00:18:25.251 real 0m12.279s 00:18:25.251 user 0m20.241s 00:18:25.251 sys 0m1.781s 00:18:25.251 08:50:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:18:25.251 08:50:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.510 08:50:22 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:18:25.510 08:50:22 bdev_raid -- 
common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:18:25.510 08:50:22 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:18:25.510 08:50:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.510 ************************************ 00:18:25.510 START TEST raid5f_state_function_test_sb 00:18:25.510 ************************************ 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test raid5f 3 true 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.510 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:25.510 08:50:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81002 00:18:25.511 Process raid pid: 81002 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81002' 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 81002 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 81002 ']' 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:25.511 08:50:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.511 [2024-11-27 08:50:22.179812] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:18:25.511 [2024-11-27 08:50:22.180262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.769 [2024-11-27 08:50:22.371163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.028 [2024-11-27 08:50:22.545172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.286 [2024-11-27 08:50:22.817785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.286 [2024-11-27 08:50:22.817849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.545 [2024-11-27 08:50:23.110638] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.545 [2024-11-27 08:50:23.110720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.545 [2024-11-27 08:50:23.110740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.545 [2024-11-27 08:50:23.110765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.545 [2024-11-27 08:50:23.110775] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:18:26.545 [2024-11-27 08:50:23.110791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.545 08:50:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.545 "name": "Existed_Raid", 00:18:26.545 "uuid": "6334d536-9b4b-4dc5-b3b2-a5acb8a7b800", 00:18:26.545 "strip_size_kb": 64, 00:18:26.545 "state": "configuring", 00:18:26.545 "raid_level": "raid5f", 00:18:26.545 "superblock": true, 00:18:26.545 "num_base_bdevs": 3, 00:18:26.545 "num_base_bdevs_discovered": 0, 00:18:26.545 "num_base_bdevs_operational": 3, 00:18:26.545 "base_bdevs_list": [ 00:18:26.545 { 00:18:26.545 "name": "BaseBdev1", 00:18:26.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.545 "is_configured": false, 00:18:26.545 "data_offset": 0, 00:18:26.545 "data_size": 0 00:18:26.545 }, 00:18:26.545 { 00:18:26.545 "name": "BaseBdev2", 00:18:26.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.545 "is_configured": false, 00:18:26.545 "data_offset": 0, 00:18:26.545 "data_size": 0 00:18:26.545 }, 00:18:26.545 { 00:18:26.545 "name": "BaseBdev3", 00:18:26.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.545 "is_configured": false, 00:18:26.545 "data_offset": 0, 00:18:26.545 "data_size": 0 00:18:26.545 } 00:18:26.545 ] 00:18:26.545 }' 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.545 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.113 [2024-11-27 08:50:23.650734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.113 
[2024-11-27 08:50:23.650785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.113 [2024-11-27 08:50:23.658671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.113 [2024-11-27 08:50:23.658734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.113 [2024-11-27 08:50:23.658751] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.113 [2024-11-27 08:50:23.658768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.113 [2024-11-27 08:50:23.658778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.113 [2024-11-27 08:50:23.658794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.113 [2024-11-27 08:50:23.707330] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.113 BaseBdev1 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.113 [ 00:18:27.113 { 00:18:27.113 "name": "BaseBdev1", 00:18:27.113 "aliases": [ 00:18:27.113 "71585464-c820-4406-9dbf-c8e95df08e28" 00:18:27.113 ], 00:18:27.113 "product_name": "Malloc disk", 00:18:27.113 "block_size": 512, 00:18:27.113 
"num_blocks": 65536, 00:18:27.113 "uuid": "71585464-c820-4406-9dbf-c8e95df08e28", 00:18:27.113 "assigned_rate_limits": { 00:18:27.113 "rw_ios_per_sec": 0, 00:18:27.113 "rw_mbytes_per_sec": 0, 00:18:27.113 "r_mbytes_per_sec": 0, 00:18:27.113 "w_mbytes_per_sec": 0 00:18:27.113 }, 00:18:27.113 "claimed": true, 00:18:27.113 "claim_type": "exclusive_write", 00:18:27.113 "zoned": false, 00:18:27.113 "supported_io_types": { 00:18:27.113 "read": true, 00:18:27.113 "write": true, 00:18:27.113 "unmap": true, 00:18:27.113 "flush": true, 00:18:27.113 "reset": true, 00:18:27.113 "nvme_admin": false, 00:18:27.113 "nvme_io": false, 00:18:27.113 "nvme_io_md": false, 00:18:27.113 "write_zeroes": true, 00:18:27.113 "zcopy": true, 00:18:27.113 "get_zone_info": false, 00:18:27.113 "zone_management": false, 00:18:27.113 "zone_append": false, 00:18:27.113 "compare": false, 00:18:27.113 "compare_and_write": false, 00:18:27.113 "abort": true, 00:18:27.113 "seek_hole": false, 00:18:27.113 "seek_data": false, 00:18:27.113 "copy": true, 00:18:27.113 "nvme_iov_md": false 00:18:27.113 }, 00:18:27.113 "memory_domains": [ 00:18:27.113 { 00:18:27.113 "dma_device_id": "system", 00:18:27.113 "dma_device_type": 1 00:18:27.113 }, 00:18:27.113 { 00:18:27.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.113 "dma_device_type": 2 00:18:27.113 } 00:18:27.113 ], 00:18:27.113 "driver_specific": {} 00:18:27.113 } 00:18:27.113 ] 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.113 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.113 "name": "Existed_Raid", 00:18:27.113 "uuid": "0a830014-c9de-46bc-9dfe-f08b12b4075a", 00:18:27.113 "strip_size_kb": 64, 00:18:27.113 "state": "configuring", 00:18:27.113 "raid_level": "raid5f", 00:18:27.113 "superblock": true, 00:18:27.113 "num_base_bdevs": 3, 00:18:27.113 "num_base_bdevs_discovered": 1, 00:18:27.113 "num_base_bdevs_operational": 3, 00:18:27.113 "base_bdevs_list": [ 00:18:27.113 { 00:18:27.113 
"name": "BaseBdev1", 00:18:27.113 "uuid": "71585464-c820-4406-9dbf-c8e95df08e28", 00:18:27.113 "is_configured": true, 00:18:27.113 "data_offset": 2048, 00:18:27.113 "data_size": 63488 00:18:27.113 }, 00:18:27.113 { 00:18:27.113 "name": "BaseBdev2", 00:18:27.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.113 "is_configured": false, 00:18:27.113 "data_offset": 0, 00:18:27.113 "data_size": 0 00:18:27.113 }, 00:18:27.113 { 00:18:27.113 "name": "BaseBdev3", 00:18:27.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.113 "is_configured": false, 00:18:27.113 "data_offset": 0, 00:18:27.113 "data_size": 0 00:18:27.113 } 00:18:27.114 ] 00:18:27.114 }' 00:18:27.114 08:50:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.114 08:50:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.680 [2024-11-27 08:50:24.259550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.680 [2024-11-27 08:50:24.259640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:27.680 [2024-11-27 08:50:24.267605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.680 [2024-11-27 08:50:24.270263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.680 [2024-11-27 08:50:24.270322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.680 [2024-11-27 08:50:24.270377] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.680 [2024-11-27 08:50:24.270398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.680 "name": "Existed_Raid", 00:18:27.680 "uuid": "776811c5-dc13-4ca8-a457-7c5578978217", 00:18:27.680 "strip_size_kb": 64, 00:18:27.680 "state": "configuring", 00:18:27.680 "raid_level": "raid5f", 00:18:27.680 "superblock": true, 00:18:27.680 "num_base_bdevs": 3, 00:18:27.680 "num_base_bdevs_discovered": 1, 00:18:27.680 "num_base_bdevs_operational": 3, 00:18:27.680 "base_bdevs_list": [ 00:18:27.680 { 00:18:27.680 "name": "BaseBdev1", 00:18:27.680 "uuid": "71585464-c820-4406-9dbf-c8e95df08e28", 00:18:27.680 "is_configured": true, 00:18:27.680 "data_offset": 2048, 00:18:27.680 "data_size": 63488 00:18:27.680 }, 00:18:27.680 { 00:18:27.680 "name": "BaseBdev2", 00:18:27.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.680 "is_configured": false, 00:18:27.680 "data_offset": 0, 00:18:27.680 "data_size": 0 00:18:27.680 }, 00:18:27.680 { 00:18:27.680 "name": "BaseBdev3", 00:18:27.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.680 "is_configured": false, 00:18:27.680 "data_offset": 0, 00:18:27.680 "data_size": 
0 00:18:27.680 } 00:18:27.680 ] 00:18:27.680 }' 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.680 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.298 [2024-11-27 08:50:24.830457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.298 BaseBdev2 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.298 [ 00:18:28.298 { 00:18:28.298 "name": "BaseBdev2", 00:18:28.298 "aliases": [ 00:18:28.298 "c4f8eb6e-f515-4b13-9923-f9c85ee1ecda" 00:18:28.298 ], 00:18:28.298 "product_name": "Malloc disk", 00:18:28.298 "block_size": 512, 00:18:28.298 "num_blocks": 65536, 00:18:28.298 "uuid": "c4f8eb6e-f515-4b13-9923-f9c85ee1ecda", 00:18:28.298 "assigned_rate_limits": { 00:18:28.298 "rw_ios_per_sec": 0, 00:18:28.298 "rw_mbytes_per_sec": 0, 00:18:28.298 "r_mbytes_per_sec": 0, 00:18:28.298 "w_mbytes_per_sec": 0 00:18:28.298 }, 00:18:28.298 "claimed": true, 00:18:28.298 "claim_type": "exclusive_write", 00:18:28.298 "zoned": false, 00:18:28.298 "supported_io_types": { 00:18:28.298 "read": true, 00:18:28.298 "write": true, 00:18:28.298 "unmap": true, 00:18:28.298 "flush": true, 00:18:28.298 "reset": true, 00:18:28.298 "nvme_admin": false, 00:18:28.298 "nvme_io": false, 00:18:28.298 "nvme_io_md": false, 00:18:28.298 "write_zeroes": true, 00:18:28.298 "zcopy": true, 00:18:28.298 "get_zone_info": false, 00:18:28.298 "zone_management": false, 00:18:28.298 "zone_append": false, 00:18:28.298 "compare": false, 00:18:28.298 "compare_and_write": false, 00:18:28.298 "abort": true, 00:18:28.298 "seek_hole": false, 00:18:28.298 "seek_data": false, 00:18:28.298 "copy": true, 00:18:28.298 "nvme_iov_md": false 00:18:28.298 }, 00:18:28.298 "memory_domains": [ 00:18:28.298 { 00:18:28.298 "dma_device_id": "system", 00:18:28.298 "dma_device_type": 1 00:18:28.298 }, 00:18:28.298 { 00:18:28.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.298 "dma_device_type": 2 00:18:28.298 } 
00:18:28.298 ], 00:18:28.298 "driver_specific": {} 00:18:28.298 } 00:18:28.298 ] 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:18:28.298 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.299 08:50:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.299 "name": "Existed_Raid", 00:18:28.299 "uuid": "776811c5-dc13-4ca8-a457-7c5578978217", 00:18:28.299 "strip_size_kb": 64, 00:18:28.299 "state": "configuring", 00:18:28.299 "raid_level": "raid5f", 00:18:28.299 "superblock": true, 00:18:28.299 "num_base_bdevs": 3, 00:18:28.299 "num_base_bdevs_discovered": 2, 00:18:28.299 "num_base_bdevs_operational": 3, 00:18:28.299 "base_bdevs_list": [ 00:18:28.299 { 00:18:28.299 "name": "BaseBdev1", 00:18:28.299 "uuid": "71585464-c820-4406-9dbf-c8e95df08e28", 00:18:28.299 "is_configured": true, 00:18:28.299 "data_offset": 2048, 00:18:28.299 "data_size": 63488 00:18:28.299 }, 00:18:28.299 { 00:18:28.299 "name": "BaseBdev2", 00:18:28.299 "uuid": "c4f8eb6e-f515-4b13-9923-f9c85ee1ecda", 00:18:28.299 "is_configured": true, 00:18:28.299 "data_offset": 2048, 00:18:28.299 "data_size": 63488 00:18:28.299 }, 00:18:28.299 { 00:18:28.299 "name": "BaseBdev3", 00:18:28.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.299 "is_configured": false, 00:18:28.299 "data_offset": 0, 00:18:28.299 "data_size": 0 00:18:28.299 } 00:18:28.299 ] 00:18:28.299 }' 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.299 08:50:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.865 [2024-11-27 08:50:25.436418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.865 [2024-11-27 08:50:25.437022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:28.865 [2024-11-27 08:50:25.437068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:28.865 BaseBdev3 00:18:28.865 [2024-11-27 08:50:25.437477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.865 [2024-11-27 08:50:25.442873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:28.865 [2024-11-27 08:50:25.442908] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:28.865 [2024-11-27 08:50:25.443279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.865 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.865 [ 00:18:28.865 { 00:18:28.865 "name": "BaseBdev3", 00:18:28.865 "aliases": [ 00:18:28.865 "7e413af6-3d2b-48c6-beab-d47449fbea7c" 00:18:28.865 ], 00:18:28.865 "product_name": "Malloc disk", 00:18:28.865 "block_size": 512, 00:18:28.865 "num_blocks": 65536, 00:18:28.865 "uuid": "7e413af6-3d2b-48c6-beab-d47449fbea7c", 00:18:28.865 "assigned_rate_limits": { 00:18:28.865 "rw_ios_per_sec": 0, 00:18:28.865 "rw_mbytes_per_sec": 0, 00:18:28.865 "r_mbytes_per_sec": 0, 00:18:28.865 "w_mbytes_per_sec": 0 00:18:28.865 }, 00:18:28.865 "claimed": true, 00:18:28.865 "claim_type": "exclusive_write", 00:18:28.865 "zoned": false, 00:18:28.865 "supported_io_types": { 00:18:28.865 "read": true, 00:18:28.865 "write": true, 00:18:28.865 "unmap": true, 00:18:28.865 "flush": true, 00:18:28.865 "reset": true, 00:18:28.865 "nvme_admin": false, 00:18:28.865 "nvme_io": false, 00:18:28.865 "nvme_io_md": false, 00:18:28.865 "write_zeroes": true, 00:18:28.865 "zcopy": true, 00:18:28.865 "get_zone_info": false, 00:18:28.865 "zone_management": false, 00:18:28.865 "zone_append": false, 00:18:28.865 "compare": false, 00:18:28.865 "compare_and_write": false, 00:18:28.865 "abort": true, 00:18:28.865 "seek_hole": false, 00:18:28.865 "seek_data": false, 00:18:28.865 "copy": true, 00:18:28.865 
"nvme_iov_md": false 00:18:28.865 }, 00:18:28.866 "memory_domains": [ 00:18:28.866 { 00:18:28.866 "dma_device_id": "system", 00:18:28.866 "dma_device_type": 1 00:18:28.866 }, 00:18:28.866 { 00:18:28.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.866 "dma_device_type": 2 00:18:28.866 } 00:18:28.866 ], 00:18:28.866 "driver_specific": {} 00:18:28.866 } 00:18:28.866 ] 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.866 "name": "Existed_Raid", 00:18:28.866 "uuid": "776811c5-dc13-4ca8-a457-7c5578978217", 00:18:28.866 "strip_size_kb": 64, 00:18:28.866 "state": "online", 00:18:28.866 "raid_level": "raid5f", 00:18:28.866 "superblock": true, 00:18:28.866 "num_base_bdevs": 3, 00:18:28.866 "num_base_bdevs_discovered": 3, 00:18:28.866 "num_base_bdevs_operational": 3, 00:18:28.866 "base_bdevs_list": [ 00:18:28.866 { 00:18:28.866 "name": "BaseBdev1", 00:18:28.866 "uuid": "71585464-c820-4406-9dbf-c8e95df08e28", 00:18:28.866 "is_configured": true, 00:18:28.866 "data_offset": 2048, 00:18:28.866 "data_size": 63488 00:18:28.866 }, 00:18:28.866 { 00:18:28.866 "name": "BaseBdev2", 00:18:28.866 "uuid": "c4f8eb6e-f515-4b13-9923-f9c85ee1ecda", 00:18:28.866 "is_configured": true, 00:18:28.866 "data_offset": 2048, 00:18:28.866 "data_size": 63488 00:18:28.866 }, 00:18:28.866 { 00:18:28.866 "name": "BaseBdev3", 00:18:28.866 "uuid": "7e413af6-3d2b-48c6-beab-d47449fbea7c", 00:18:28.866 "is_configured": true, 00:18:28.866 "data_offset": 2048, 00:18:28.866 "data_size": 63488 00:18:28.866 } 00:18:28.866 ] 00:18:28.866 }' 00:18:28.866 08:50:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.866 08:50:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.433 [2024-11-27 08:50:26.017781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.433 "name": "Existed_Raid", 00:18:29.433 "aliases": [ 00:18:29.433 "776811c5-dc13-4ca8-a457-7c5578978217" 00:18:29.433 ], 00:18:29.433 "product_name": "Raid Volume", 00:18:29.433 "block_size": 512, 00:18:29.433 "num_blocks": 126976, 00:18:29.433 "uuid": "776811c5-dc13-4ca8-a457-7c5578978217", 00:18:29.433 "assigned_rate_limits": { 00:18:29.433 "rw_ios_per_sec": 0, 00:18:29.433 
"rw_mbytes_per_sec": 0, 00:18:29.433 "r_mbytes_per_sec": 0, 00:18:29.433 "w_mbytes_per_sec": 0 00:18:29.433 }, 00:18:29.433 "claimed": false, 00:18:29.433 "zoned": false, 00:18:29.433 "supported_io_types": { 00:18:29.433 "read": true, 00:18:29.433 "write": true, 00:18:29.433 "unmap": false, 00:18:29.433 "flush": false, 00:18:29.433 "reset": true, 00:18:29.433 "nvme_admin": false, 00:18:29.433 "nvme_io": false, 00:18:29.433 "nvme_io_md": false, 00:18:29.433 "write_zeroes": true, 00:18:29.433 "zcopy": false, 00:18:29.433 "get_zone_info": false, 00:18:29.433 "zone_management": false, 00:18:29.433 "zone_append": false, 00:18:29.433 "compare": false, 00:18:29.433 "compare_and_write": false, 00:18:29.433 "abort": false, 00:18:29.433 "seek_hole": false, 00:18:29.433 "seek_data": false, 00:18:29.433 "copy": false, 00:18:29.433 "nvme_iov_md": false 00:18:29.433 }, 00:18:29.433 "driver_specific": { 00:18:29.433 "raid": { 00:18:29.433 "uuid": "776811c5-dc13-4ca8-a457-7c5578978217", 00:18:29.433 "strip_size_kb": 64, 00:18:29.433 "state": "online", 00:18:29.433 "raid_level": "raid5f", 00:18:29.433 "superblock": true, 00:18:29.433 "num_base_bdevs": 3, 00:18:29.433 "num_base_bdevs_discovered": 3, 00:18:29.433 "num_base_bdevs_operational": 3, 00:18:29.433 "base_bdevs_list": [ 00:18:29.433 { 00:18:29.433 "name": "BaseBdev1", 00:18:29.433 "uuid": "71585464-c820-4406-9dbf-c8e95df08e28", 00:18:29.433 "is_configured": true, 00:18:29.433 "data_offset": 2048, 00:18:29.433 "data_size": 63488 00:18:29.433 }, 00:18:29.433 { 00:18:29.433 "name": "BaseBdev2", 00:18:29.433 "uuid": "c4f8eb6e-f515-4b13-9923-f9c85ee1ecda", 00:18:29.433 "is_configured": true, 00:18:29.433 "data_offset": 2048, 00:18:29.433 "data_size": 63488 00:18:29.433 }, 00:18:29.433 { 00:18:29.433 "name": "BaseBdev3", 00:18:29.433 "uuid": "7e413af6-3d2b-48c6-beab-d47449fbea7c", 00:18:29.433 "is_configured": true, 00:18:29.433 "data_offset": 2048, 00:18:29.433 "data_size": 63488 00:18:29.433 } 00:18:29.433 ] 00:18:29.433 } 
00:18:29.433 } 00:18:29.433 }' 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:29.433 BaseBdev2 00:18:29.433 BaseBdev3' 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.433 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.692 [2024-11-27 
08:50:26.329689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.692 08:50:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.692 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.951 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.951 "name": "Existed_Raid", 00:18:29.951 "uuid": "776811c5-dc13-4ca8-a457-7c5578978217", 00:18:29.951 "strip_size_kb": 64, 00:18:29.951 "state": "online", 00:18:29.951 "raid_level": "raid5f", 00:18:29.951 "superblock": true, 00:18:29.951 "num_base_bdevs": 3, 00:18:29.951 "num_base_bdevs_discovered": 2, 00:18:29.951 "num_base_bdevs_operational": 2, 00:18:29.951 "base_bdevs_list": [ 00:18:29.951 { 00:18:29.951 "name": null, 00:18:29.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.951 "is_configured": false, 00:18:29.951 "data_offset": 0, 00:18:29.951 "data_size": 63488 00:18:29.951 }, 00:18:29.951 { 00:18:29.951 "name": "BaseBdev2", 00:18:29.951 "uuid": "c4f8eb6e-f515-4b13-9923-f9c85ee1ecda", 00:18:29.951 "is_configured": true, 00:18:29.951 "data_offset": 2048, 00:18:29.951 "data_size": 63488 00:18:29.951 }, 00:18:29.951 { 00:18:29.951 "name": "BaseBdev3", 00:18:29.951 "uuid": "7e413af6-3d2b-48c6-beab-d47449fbea7c", 00:18:29.951 "is_configured": true, 00:18:29.951 "data_offset": 2048, 00:18:29.951 "data_size": 63488 00:18:29.951 } 00:18:29.951 ] 00:18:29.951 }' 00:18:29.951 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.951 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.209 08:50:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.209 [2024-11-27 08:50:26.920450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:30.209 [2024-11-27 08:50:26.920698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.468 [2024-11-27 08:50:27.011724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:30.468 08:50:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 [2024-11-27 08:50:27.071778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:30.468 [2024-11-27 08:50:27.071857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.468 
08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.468 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.727 BaseBdev2 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:18:30.727 08:50:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.727 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.727 [ 00:18:30.727 { 00:18:30.727 "name": "BaseBdev2", 00:18:30.727 "aliases": [ 00:18:30.727 "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c" 00:18:30.727 ], 00:18:30.727 "product_name": "Malloc disk", 00:18:30.727 "block_size": 512, 00:18:30.727 "num_blocks": 65536, 00:18:30.727 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:30.727 "assigned_rate_limits": { 00:18:30.727 "rw_ios_per_sec": 0, 00:18:30.727 "rw_mbytes_per_sec": 0, 00:18:30.727 "r_mbytes_per_sec": 0, 00:18:30.727 "w_mbytes_per_sec": 0 00:18:30.727 }, 00:18:30.727 "claimed": false, 00:18:30.727 "zoned": false, 00:18:30.727 "supported_io_types": { 00:18:30.727 "read": true, 00:18:30.727 "write": true, 00:18:30.728 "unmap": true, 00:18:30.728 "flush": true, 00:18:30.728 "reset": true, 00:18:30.728 "nvme_admin": false, 00:18:30.728 "nvme_io": false, 00:18:30.728 "nvme_io_md": false, 00:18:30.728 "write_zeroes": true, 00:18:30.728 "zcopy": true, 00:18:30.728 "get_zone_info": false, 
00:18:30.728 "zone_management": false, 00:18:30.728 "zone_append": false, 00:18:30.728 "compare": false, 00:18:30.728 "compare_and_write": false, 00:18:30.728 "abort": true, 00:18:30.728 "seek_hole": false, 00:18:30.728 "seek_data": false, 00:18:30.728 "copy": true, 00:18:30.728 "nvme_iov_md": false 00:18:30.728 }, 00:18:30.728 "memory_domains": [ 00:18:30.728 { 00:18:30.728 "dma_device_id": "system", 00:18:30.728 "dma_device_type": 1 00:18:30.728 }, 00:18:30.728 { 00:18:30.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.728 "dma_device_type": 2 00:18:30.728 } 00:18:30.728 ], 00:18:30.728 "driver_specific": {} 00:18:30.728 } 00:18:30.728 ] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 BaseBdev3 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:30.728 08:50:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 [ 00:18:30.728 { 00:18:30.728 "name": "BaseBdev3", 00:18:30.728 "aliases": [ 00:18:30.728 "2d6e19ca-03ee-4174-a225-316721a7cd5b" 00:18:30.728 ], 00:18:30.728 "product_name": "Malloc disk", 00:18:30.728 "block_size": 512, 00:18:30.728 "num_blocks": 65536, 00:18:30.728 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:30.728 "assigned_rate_limits": { 00:18:30.728 "rw_ios_per_sec": 0, 00:18:30.728 "rw_mbytes_per_sec": 0, 00:18:30.728 "r_mbytes_per_sec": 0, 00:18:30.728 "w_mbytes_per_sec": 0 00:18:30.728 }, 00:18:30.728 "claimed": false, 00:18:30.728 "zoned": false, 00:18:30.728 "supported_io_types": { 00:18:30.728 "read": true, 00:18:30.728 "write": true, 00:18:30.728 "unmap": true, 00:18:30.728 "flush": true, 00:18:30.728 "reset": true, 00:18:30.728 "nvme_admin": false, 00:18:30.728 "nvme_io": false, 00:18:30.728 "nvme_io_md": 
false, 00:18:30.728 "write_zeroes": true, 00:18:30.728 "zcopy": true, 00:18:30.728 "get_zone_info": false, 00:18:30.728 "zone_management": false, 00:18:30.728 "zone_append": false, 00:18:30.728 "compare": false, 00:18:30.728 "compare_and_write": false, 00:18:30.728 "abort": true, 00:18:30.728 "seek_hole": false, 00:18:30.728 "seek_data": false, 00:18:30.728 "copy": true, 00:18:30.728 "nvme_iov_md": false 00:18:30.728 }, 00:18:30.728 "memory_domains": [ 00:18:30.728 { 00:18:30.728 "dma_device_id": "system", 00:18:30.728 "dma_device_type": 1 00:18:30.728 }, 00:18:30.728 { 00:18:30.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.728 "dma_device_type": 2 00:18:30.728 } 00:18:30.728 ], 00:18:30.728 "driver_specific": {} 00:18:30.728 } 00:18:30.728 ] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 [2024-11-27 08:50:27.386854] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.728 [2024-11-27 08:50:27.387059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:30.728 [2024-11-27 08:50:27.387204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:18:30.728 [2024-11-27 08:50:27.390000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.728 08:50:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.728 "name": "Existed_Raid", 00:18:30.728 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:30.728 "strip_size_kb": 64, 00:18:30.728 "state": "configuring", 00:18:30.728 "raid_level": "raid5f", 00:18:30.728 "superblock": true, 00:18:30.728 "num_base_bdevs": 3, 00:18:30.728 "num_base_bdevs_discovered": 2, 00:18:30.728 "num_base_bdevs_operational": 3, 00:18:30.728 "base_bdevs_list": [ 00:18:30.728 { 00:18:30.728 "name": "BaseBdev1", 00:18:30.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.728 "is_configured": false, 00:18:30.728 "data_offset": 0, 00:18:30.728 "data_size": 0 00:18:30.728 }, 00:18:30.728 { 00:18:30.728 "name": "BaseBdev2", 00:18:30.728 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:30.728 "is_configured": true, 00:18:30.728 "data_offset": 2048, 00:18:30.728 "data_size": 63488 00:18:30.728 }, 00:18:30.728 { 00:18:30.728 "name": "BaseBdev3", 00:18:30.728 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:30.728 "is_configured": true, 00:18:30.728 "data_offset": 2048, 00:18:30.728 "data_size": 63488 00:18:30.728 } 00:18:30.728 ] 00:18:30.728 }' 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.728 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.295 [2024-11-27 08:50:27.923022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.295 
08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.295 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:31.295 "name": "Existed_Raid", 00:18:31.295 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:31.295 "strip_size_kb": 64, 00:18:31.295 "state": "configuring", 00:18:31.295 "raid_level": "raid5f", 00:18:31.295 "superblock": true, 00:18:31.295 "num_base_bdevs": 3, 00:18:31.295 "num_base_bdevs_discovered": 1, 00:18:31.295 "num_base_bdevs_operational": 3, 00:18:31.296 "base_bdevs_list": [ 00:18:31.296 { 00:18:31.296 "name": "BaseBdev1", 00:18:31.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.296 "is_configured": false, 00:18:31.296 "data_offset": 0, 00:18:31.296 "data_size": 0 00:18:31.296 }, 00:18:31.296 { 00:18:31.296 "name": null, 00:18:31.296 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:31.296 "is_configured": false, 00:18:31.296 "data_offset": 0, 00:18:31.296 "data_size": 63488 00:18:31.296 }, 00:18:31.296 { 00:18:31.296 "name": "BaseBdev3", 00:18:31.296 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:31.296 "is_configured": true, 00:18:31.296 "data_offset": 2048, 00:18:31.296 "data_size": 63488 00:18:31.296 } 00:18:31.296 ] 00:18:31.296 }' 00:18:31.296 08:50:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.296 08:50:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.862 [2024-11-27 08:50:28.548938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.862 BaseBdev1 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:31.862 
08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.862 [ 00:18:31.862 { 00:18:31.862 "name": "BaseBdev1", 00:18:31.862 "aliases": [ 00:18:31.862 "2a746eef-da7b-484e-a837-ec8dad8d5dd4" 00:18:31.862 ], 00:18:31.862 "product_name": "Malloc disk", 00:18:31.862 "block_size": 512, 00:18:31.862 "num_blocks": 65536, 00:18:31.862 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:31.862 "assigned_rate_limits": { 00:18:31.862 "rw_ios_per_sec": 0, 00:18:31.862 "rw_mbytes_per_sec": 0, 00:18:31.862 "r_mbytes_per_sec": 0, 00:18:31.862 "w_mbytes_per_sec": 0 00:18:31.862 }, 00:18:31.862 "claimed": true, 00:18:31.862 "claim_type": "exclusive_write", 00:18:31.862 "zoned": false, 00:18:31.862 "supported_io_types": { 00:18:31.862 "read": true, 00:18:31.862 "write": true, 00:18:31.862 "unmap": true, 00:18:31.862 "flush": true, 00:18:31.862 "reset": true, 00:18:31.862 "nvme_admin": false, 00:18:31.862 "nvme_io": false, 00:18:31.862 "nvme_io_md": false, 00:18:31.862 "write_zeroes": true, 00:18:31.862 "zcopy": true, 00:18:31.862 "get_zone_info": false, 00:18:31.862 "zone_management": false, 00:18:31.862 "zone_append": false, 00:18:31.862 "compare": false, 00:18:31.862 "compare_and_write": false, 00:18:31.862 "abort": true, 00:18:31.862 "seek_hole": false, 00:18:31.862 "seek_data": false, 00:18:31.862 "copy": true, 00:18:31.862 "nvme_iov_md": false 00:18:31.862 }, 00:18:31.862 "memory_domains": [ 00:18:31.862 { 00:18:31.862 "dma_device_id": "system", 00:18:31.862 "dma_device_type": 1 00:18:31.862 }, 00:18:31.862 { 00:18:31.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.862 "dma_device_type": 2 00:18:31.862 } 00:18:31.862 ], 00:18:31.862 "driver_specific": {} 00:18:31.862 } 00:18:31.862 ] 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.862 
08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.862 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.863 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.863 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.120 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:32.120 "name": "Existed_Raid", 00:18:32.120 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:32.120 "strip_size_kb": 64, 00:18:32.120 "state": "configuring", 00:18:32.120 "raid_level": "raid5f", 00:18:32.120 "superblock": true, 00:18:32.120 "num_base_bdevs": 3, 00:18:32.120 "num_base_bdevs_discovered": 2, 00:18:32.120 "num_base_bdevs_operational": 3, 00:18:32.120 "base_bdevs_list": [ 00:18:32.120 { 00:18:32.120 "name": "BaseBdev1", 00:18:32.120 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:32.120 "is_configured": true, 00:18:32.121 "data_offset": 2048, 00:18:32.121 "data_size": 63488 00:18:32.121 }, 00:18:32.121 { 00:18:32.121 "name": null, 00:18:32.121 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:32.121 "is_configured": false, 00:18:32.121 "data_offset": 0, 00:18:32.121 "data_size": 63488 00:18:32.121 }, 00:18:32.121 { 00:18:32.121 "name": "BaseBdev3", 00:18:32.121 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:32.121 "is_configured": true, 00:18:32.121 "data_offset": 2048, 00:18:32.121 "data_size": 63488 00:18:32.121 } 00:18:32.121 ] 00:18:32.121 }' 00:18:32.121 08:50:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.121 08:50:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.397 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:32.397 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.397 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.397 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.397 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.692 [2024-11-27 08:50:29.165178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.692 08:50:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.692 "name": "Existed_Raid", 00:18:32.692 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:32.692 "strip_size_kb": 64, 00:18:32.692 "state": "configuring", 00:18:32.692 "raid_level": "raid5f", 00:18:32.692 "superblock": true, 00:18:32.692 "num_base_bdevs": 3, 00:18:32.692 "num_base_bdevs_discovered": 1, 00:18:32.692 "num_base_bdevs_operational": 3, 00:18:32.692 "base_bdevs_list": [ 00:18:32.692 { 00:18:32.692 "name": "BaseBdev1", 00:18:32.692 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:32.692 "is_configured": true, 00:18:32.692 "data_offset": 2048, 00:18:32.692 "data_size": 63488 00:18:32.692 }, 00:18:32.692 { 00:18:32.692 "name": null, 00:18:32.692 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:32.692 "is_configured": false, 00:18:32.692 "data_offset": 0, 00:18:32.692 "data_size": 63488 00:18:32.692 }, 00:18:32.692 { 00:18:32.692 "name": null, 00:18:32.692 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:32.692 "is_configured": false, 00:18:32.692 "data_offset": 0, 00:18:32.692 "data_size": 63488 00:18:32.692 } 00:18:32.692 ] 00:18:32.692 }' 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.692 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.259 [2024-11-27 08:50:29.773428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.259 08:50:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.259 "name": "Existed_Raid", 00:18:33.259 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:33.259 "strip_size_kb": 64, 00:18:33.259 "state": "configuring", 00:18:33.259 "raid_level": "raid5f", 00:18:33.259 "superblock": true, 00:18:33.259 "num_base_bdevs": 3, 00:18:33.259 "num_base_bdevs_discovered": 2, 00:18:33.259 "num_base_bdevs_operational": 3, 00:18:33.259 "base_bdevs_list": [ 00:18:33.259 { 00:18:33.259 "name": "BaseBdev1", 00:18:33.259 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:33.259 "is_configured": true, 00:18:33.259 "data_offset": 2048, 00:18:33.259 "data_size": 63488 00:18:33.259 }, 00:18:33.259 { 00:18:33.259 "name": null, 00:18:33.259 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:33.259 "is_configured": false, 00:18:33.259 "data_offset": 0, 00:18:33.259 "data_size": 63488 00:18:33.259 }, 00:18:33.259 { 
00:18:33.259 "name": "BaseBdev3", 00:18:33.259 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:33.259 "is_configured": true, 00:18:33.259 "data_offset": 2048, 00:18:33.259 "data_size": 63488 00:18:33.259 } 00:18:33.259 ] 00:18:33.259 }' 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.259 08:50:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.516 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.516 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.516 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.516 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.775 [2024-11-27 08:50:30.317572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.775 "name": "Existed_Raid", 00:18:33.775 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:33.775 "strip_size_kb": 64, 00:18:33.775 "state": "configuring", 00:18:33.775 "raid_level": "raid5f", 00:18:33.775 "superblock": true, 00:18:33.775 "num_base_bdevs": 3, 00:18:33.775 "num_base_bdevs_discovered": 1, 00:18:33.775 
"num_base_bdevs_operational": 3, 00:18:33.775 "base_bdevs_list": [ 00:18:33.775 { 00:18:33.775 "name": null, 00:18:33.775 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:33.775 "is_configured": false, 00:18:33.775 "data_offset": 0, 00:18:33.775 "data_size": 63488 00:18:33.775 }, 00:18:33.775 { 00:18:33.775 "name": null, 00:18:33.775 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:33.775 "is_configured": false, 00:18:33.775 "data_offset": 0, 00:18:33.775 "data_size": 63488 00:18:33.775 }, 00:18:33.775 { 00:18:33.775 "name": "BaseBdev3", 00:18:33.775 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:33.775 "is_configured": true, 00:18:33.775 "data_offset": 2048, 00:18:33.775 "data_size": 63488 00:18:33.775 } 00:18:33.775 ] 00:18:33.775 }' 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.775 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.339 08:50:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.339 [2024-11-27 08:50:30.939115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.339 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.340 08:50:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.340 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.340 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.340 "name": "Existed_Raid", 00:18:34.340 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:34.340 "strip_size_kb": 64, 00:18:34.340 "state": "configuring", 00:18:34.340 "raid_level": "raid5f", 00:18:34.340 "superblock": true, 00:18:34.340 "num_base_bdevs": 3, 00:18:34.340 "num_base_bdevs_discovered": 2, 00:18:34.340 "num_base_bdevs_operational": 3, 00:18:34.340 "base_bdevs_list": [ 00:18:34.340 { 00:18:34.340 "name": null, 00:18:34.340 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:34.340 "is_configured": false, 00:18:34.340 "data_offset": 0, 00:18:34.340 "data_size": 63488 00:18:34.340 }, 00:18:34.340 { 00:18:34.340 "name": "BaseBdev2", 00:18:34.340 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:34.340 "is_configured": true, 00:18:34.340 "data_offset": 2048, 00:18:34.340 "data_size": 63488 00:18:34.340 }, 00:18:34.340 { 00:18:34.340 "name": "BaseBdev3", 00:18:34.340 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:34.340 "is_configured": true, 00:18:34.340 "data_offset": 2048, 00:18:34.340 "data_size": 63488 00:18:34.340 } 00:18:34.340 ] 00:18:34.340 }' 00:18:34.340 08:50:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.340 08:50:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.906 08:50:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2a746eef-da7b-484e-a837-ec8dad8d5dd4 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.906 [2024-11-27 08:50:31.577491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:34.906 [2024-11-27 08:50:31.577859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:34.906 [2024-11-27 08:50:31.577900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:34.906 NewBaseBdev 00:18:34.906 [2024-11-27 08:50:31.578234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.906 08:50:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.906 [2024-11-27 08:50:31.583379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:34.906 [2024-11-27 08:50:31.583553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:34.906 [2024-11-27 08:50:31.584067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.906 [ 00:18:34.906 { 00:18:34.906 "name": "NewBaseBdev", 00:18:34.906 
"aliases": [ 00:18:34.906 "2a746eef-da7b-484e-a837-ec8dad8d5dd4" 00:18:34.906 ], 00:18:34.906 "product_name": "Malloc disk", 00:18:34.906 "block_size": 512, 00:18:34.906 "num_blocks": 65536, 00:18:34.906 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:34.906 "assigned_rate_limits": { 00:18:34.906 "rw_ios_per_sec": 0, 00:18:34.906 "rw_mbytes_per_sec": 0, 00:18:34.906 "r_mbytes_per_sec": 0, 00:18:34.906 "w_mbytes_per_sec": 0 00:18:34.906 }, 00:18:34.906 "claimed": true, 00:18:34.906 "claim_type": "exclusive_write", 00:18:34.906 "zoned": false, 00:18:34.906 "supported_io_types": { 00:18:34.906 "read": true, 00:18:34.906 "write": true, 00:18:34.906 "unmap": true, 00:18:34.906 "flush": true, 00:18:34.906 "reset": true, 00:18:34.906 "nvme_admin": false, 00:18:34.906 "nvme_io": false, 00:18:34.906 "nvme_io_md": false, 00:18:34.906 "write_zeroes": true, 00:18:34.906 "zcopy": true, 00:18:34.906 "get_zone_info": false, 00:18:34.906 "zone_management": false, 00:18:34.906 "zone_append": false, 00:18:34.906 "compare": false, 00:18:34.906 "compare_and_write": false, 00:18:34.906 "abort": true, 00:18:34.906 "seek_hole": false, 00:18:34.906 "seek_data": false, 00:18:34.906 "copy": true, 00:18:34.906 "nvme_iov_md": false 00:18:34.906 }, 00:18:34.906 "memory_domains": [ 00:18:34.906 { 00:18:34.906 "dma_device_id": "system", 00:18:34.906 "dma_device_type": 1 00:18:34.906 }, 00:18:34.906 { 00:18:34.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.906 "dma_device_type": 2 00:18:34.906 } 00:18:34.906 ], 00:18:34.906 "driver_specific": {} 00:18:34.906 } 00:18:34.906 ] 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:34.906 08:50:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.906 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.907 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.165 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.165 "name": "Existed_Raid", 00:18:35.165 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:35.165 "strip_size_kb": 64, 00:18:35.165 "state": "online", 00:18:35.165 "raid_level": "raid5f", 00:18:35.165 "superblock": true, 00:18:35.165 
"num_base_bdevs": 3, 00:18:35.165 "num_base_bdevs_discovered": 3, 00:18:35.165 "num_base_bdevs_operational": 3, 00:18:35.165 "base_bdevs_list": [ 00:18:35.165 { 00:18:35.165 "name": "NewBaseBdev", 00:18:35.165 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:35.165 "is_configured": true, 00:18:35.165 "data_offset": 2048, 00:18:35.165 "data_size": 63488 00:18:35.165 }, 00:18:35.165 { 00:18:35.165 "name": "BaseBdev2", 00:18:35.165 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:35.165 "is_configured": true, 00:18:35.165 "data_offset": 2048, 00:18:35.165 "data_size": 63488 00:18:35.166 }, 00:18:35.166 { 00:18:35.166 "name": "BaseBdev3", 00:18:35.166 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:35.166 "is_configured": true, 00:18:35.166 "data_offset": 2048, 00:18:35.166 "data_size": 63488 00:18:35.166 } 00:18:35.166 ] 00:18:35.166 }' 00:18:35.166 08:50:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.166 08:50:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:35.424 [2024-11-27 08:50:32.150778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.424 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.683 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.683 "name": "Existed_Raid", 00:18:35.683 "aliases": [ 00:18:35.683 "d79e9d7e-bb43-4e77-a559-2946cb377477" 00:18:35.683 ], 00:18:35.683 "product_name": "Raid Volume", 00:18:35.683 "block_size": 512, 00:18:35.683 "num_blocks": 126976, 00:18:35.683 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:35.683 "assigned_rate_limits": { 00:18:35.683 "rw_ios_per_sec": 0, 00:18:35.683 "rw_mbytes_per_sec": 0, 00:18:35.683 "r_mbytes_per_sec": 0, 00:18:35.683 "w_mbytes_per_sec": 0 00:18:35.683 }, 00:18:35.683 "claimed": false, 00:18:35.683 "zoned": false, 00:18:35.683 "supported_io_types": { 00:18:35.683 "read": true, 00:18:35.683 "write": true, 00:18:35.683 "unmap": false, 00:18:35.683 "flush": false, 00:18:35.683 "reset": true, 00:18:35.683 "nvme_admin": false, 00:18:35.683 "nvme_io": false, 00:18:35.683 "nvme_io_md": false, 00:18:35.683 "write_zeroes": true, 00:18:35.683 "zcopy": false, 00:18:35.683 "get_zone_info": false, 00:18:35.683 "zone_management": false, 00:18:35.683 "zone_append": false, 00:18:35.683 "compare": false, 00:18:35.683 "compare_and_write": false, 00:18:35.683 "abort": false, 00:18:35.683 "seek_hole": false, 00:18:35.683 "seek_data": false, 00:18:35.683 "copy": false, 00:18:35.683 "nvme_iov_md": false 00:18:35.683 }, 00:18:35.683 "driver_specific": { 00:18:35.683 "raid": { 00:18:35.683 "uuid": "d79e9d7e-bb43-4e77-a559-2946cb377477", 00:18:35.683 
"strip_size_kb": 64, 00:18:35.683 "state": "online", 00:18:35.683 "raid_level": "raid5f", 00:18:35.683 "superblock": true, 00:18:35.683 "num_base_bdevs": 3, 00:18:35.683 "num_base_bdevs_discovered": 3, 00:18:35.683 "num_base_bdevs_operational": 3, 00:18:35.683 "base_bdevs_list": [ 00:18:35.683 { 00:18:35.683 "name": "NewBaseBdev", 00:18:35.683 "uuid": "2a746eef-da7b-484e-a837-ec8dad8d5dd4", 00:18:35.683 "is_configured": true, 00:18:35.683 "data_offset": 2048, 00:18:35.683 "data_size": 63488 00:18:35.683 }, 00:18:35.683 { 00:18:35.683 "name": "BaseBdev2", 00:18:35.683 "uuid": "faa04e8b-2cdc-4fd2-8f1c-adc2d6b4203c", 00:18:35.683 "is_configured": true, 00:18:35.683 "data_offset": 2048, 00:18:35.683 "data_size": 63488 00:18:35.683 }, 00:18:35.683 { 00:18:35.683 "name": "BaseBdev3", 00:18:35.683 "uuid": "2d6e19ca-03ee-4174-a225-316721a7cd5b", 00:18:35.683 "is_configured": true, 00:18:35.683 "data_offset": 2048, 00:18:35.683 "data_size": 63488 00:18:35.683 } 00:18:35.683 ] 00:18:35.683 } 00:18:35.683 } 00:18:35.683 }' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:35.684 BaseBdev2 00:18:35.684 BaseBdev3' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.684 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.942 [2024-11-27 08:50:32.490655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:35.942 [2024-11-27 08:50:32.490699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.942 [2024-11-27 08:50:32.490830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.942 [2024-11-27 08:50:32.491237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.942 [2024-11-27 08:50:32.491262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81002 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 81002 ']' 00:18:35.942 08:50:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 81002 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 81002 00:18:35.942 killing process with pid 81002 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 81002' 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 81002 00:18:35.942 [2024-11-27 08:50:32.532777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.942 08:50:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 81002 00:18:36.200 [2024-11-27 08:50:32.830612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.573 08:50:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:37.573 00:18:37.573 real 0m11.895s 00:18:37.573 user 0m19.456s 00:18:37.573 sys 0m1.800s 00:18:37.573 08:50:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:18:37.574 08:50:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.574 ************************************ 00:18:37.574 END TEST raid5f_state_function_test_sb 00:18:37.574 ************************************ 00:18:37.574 08:50:33 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:18:37.574 08:50:33 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:18:37.574 08:50:33 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:18:37.574 08:50:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.574 ************************************ 00:18:37.574 START TEST raid5f_superblock_test 00:18:37.574 ************************************ 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test raid5f 3 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81632 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81632 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 81632 ']' 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:37.574 08:50:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.574 [2024-11-27 08:50:34.127440] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:18:37.574 [2024-11-27 08:50:34.127922] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81632 ] 00:18:37.574 [2024-11-27 08:50:34.312109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.831 [2024-11-27 08:50:34.457082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.089 [2024-11-27 08:50:34.681914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.089 [2024-11-27 08:50:34.681971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.656 malloc1 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.656 [2024-11-27 08:50:35.238654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:38.656 [2024-11-27 08:50:35.239376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.656 [2024-11-27 08:50:35.239536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:38.656 [2024-11-27 08:50:35.239566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.656 [2024-11-27 08:50:35.242541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.656 [2024-11-27 08:50:35.242589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:38.656 pt1 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.656 malloc2 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.656 [2024-11-27 08:50:35.290320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.656 [2024-11-27 08:50:35.290412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.656 [2024-11-27 08:50:35.290447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:38.656 [2024-11-27 08:50:35.290462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.656 [2024-11-27 08:50:35.293383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.656 [2024-11-27 08:50:35.293427] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.656 pt2 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.656 malloc3 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.656 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.656 [2024-11-27 08:50:35.360163] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:38.656 [2024-11-27 08:50:35.360392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.656 [2024-11-27 08:50:35.360439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:38.656 [2024-11-27 08:50:35.360457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.656 [2024-11-27 08:50:35.363399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.656 [2024-11-27 08:50:35.363563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:38.657 pt3 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.657 [2024-11-27 08:50:35.368428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:38.657 [2024-11-27 08:50:35.370987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.657 [2024-11-27 08:50:35.371075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:38.657 [2024-11-27 08:50:35.371308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:38.657 [2024-11-27 08:50:35.371477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:18:38.657 [2024-11-27 08:50:35.371838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:38.657 [2024-11-27 08:50:35.377156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:38.657 [2024-11-27 08:50:35.377293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:38.657 [2024-11-27 08:50:35.377696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.657 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.916 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.916 "name": "raid_bdev1", 00:18:38.916 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:38.916 "strip_size_kb": 64, 00:18:38.916 "state": "online", 00:18:38.916 "raid_level": "raid5f", 00:18:38.916 "superblock": true, 00:18:38.916 "num_base_bdevs": 3, 00:18:38.916 "num_base_bdevs_discovered": 3, 00:18:38.916 "num_base_bdevs_operational": 3, 00:18:38.916 "base_bdevs_list": [ 00:18:38.916 { 00:18:38.916 "name": "pt1", 00:18:38.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.916 "is_configured": true, 00:18:38.916 "data_offset": 2048, 00:18:38.916 "data_size": 63488 00:18:38.916 }, 00:18:38.916 { 00:18:38.916 "name": "pt2", 00:18:38.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.916 "is_configured": true, 00:18:38.916 "data_offset": 2048, 00:18:38.916 "data_size": 63488 00:18:38.916 }, 00:18:38.916 { 00:18:38.916 "name": "pt3", 00:18:38.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:38.916 "is_configured": true, 00:18:38.916 "data_offset": 2048, 00:18:38.916 "data_size": 63488 00:18:38.916 } 00:18:38.916 ] 00:18:38.916 }' 00:18:38.916 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.916 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:39.174 08:50:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.174 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.174 [2024-11-27 08:50:35.928294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.433 08:50:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.433 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:39.433 "name": "raid_bdev1", 00:18:39.433 "aliases": [ 00:18:39.433 "03bbfcc1-d5ca-4e4f-acef-082766b5d724" 00:18:39.433 ], 00:18:39.433 "product_name": "Raid Volume", 00:18:39.433 "block_size": 512, 00:18:39.433 "num_blocks": 126976, 00:18:39.433 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:39.433 "assigned_rate_limits": { 00:18:39.433 "rw_ios_per_sec": 0, 00:18:39.433 "rw_mbytes_per_sec": 0, 00:18:39.434 "r_mbytes_per_sec": 0, 00:18:39.434 "w_mbytes_per_sec": 0 00:18:39.434 }, 00:18:39.434 "claimed": false, 00:18:39.434 "zoned": false, 00:18:39.434 "supported_io_types": { 00:18:39.434 "read": true, 00:18:39.434 "write": true, 00:18:39.434 "unmap": false, 00:18:39.434 "flush": false, 00:18:39.434 "reset": true, 00:18:39.434 "nvme_admin": false, 00:18:39.434 "nvme_io": false, 00:18:39.434 "nvme_io_md": false, 
00:18:39.434 "write_zeroes": true, 00:18:39.434 "zcopy": false, 00:18:39.434 "get_zone_info": false, 00:18:39.434 "zone_management": false, 00:18:39.434 "zone_append": false, 00:18:39.434 "compare": false, 00:18:39.434 "compare_and_write": false, 00:18:39.434 "abort": false, 00:18:39.434 "seek_hole": false, 00:18:39.434 "seek_data": false, 00:18:39.434 "copy": false, 00:18:39.434 "nvme_iov_md": false 00:18:39.434 }, 00:18:39.434 "driver_specific": { 00:18:39.434 "raid": { 00:18:39.434 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:39.434 "strip_size_kb": 64, 00:18:39.434 "state": "online", 00:18:39.434 "raid_level": "raid5f", 00:18:39.434 "superblock": true, 00:18:39.434 "num_base_bdevs": 3, 00:18:39.434 "num_base_bdevs_discovered": 3, 00:18:39.434 "num_base_bdevs_operational": 3, 00:18:39.434 "base_bdevs_list": [ 00:18:39.434 { 00:18:39.434 "name": "pt1", 00:18:39.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.434 "is_configured": true, 00:18:39.434 "data_offset": 2048, 00:18:39.434 "data_size": 63488 00:18:39.434 }, 00:18:39.434 { 00:18:39.434 "name": "pt2", 00:18:39.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.434 "is_configured": true, 00:18:39.434 "data_offset": 2048, 00:18:39.434 "data_size": 63488 00:18:39.434 }, 00:18:39.434 { 00:18:39.434 "name": "pt3", 00:18:39.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.434 "is_configured": true, 00:18:39.434 "data_offset": 2048, 00:18:39.434 "data_size": 63488 00:18:39.434 } 00:18:39.434 ] 00:18:39.434 } 00:18:39.434 } 00:18:39.434 }' 00:18:39.434 08:50:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:39.434 pt2 00:18:39.434 pt3' 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.434 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.694 
08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:39.694 [2024-11-27 08:50:36.284331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=03bbfcc1-d5ca-4e4f-acef-082766b5d724 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 03bbfcc1-d5ca-4e4f-acef-082766b5d724 ']' 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.694 08:50:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.694 [2024-11-27 08:50:36.336104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.694 [2024-11-27 08:50:36.336149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.694 [2024-11-27 08:50:36.336269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.694 [2024-11-27 08:50:36.336399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.694 [2024-11-27 08:50:36.336417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.694 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.954 [2024-11-27 08:50:36.484226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:39.954 [2024-11-27 08:50:36.486965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:39.954 [2024-11-27 08:50:36.487178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:39.954 [2024-11-27 08:50:36.487274] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:39.954 [2024-11-27 08:50:36.487370] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:39.954 [2024-11-27 08:50:36.487407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:39.954 [2024-11-27 08:50:36.487436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.954 [2024-11-27 08:50:36.487450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:39.954 request: 00:18:39.954 { 00:18:39.954 "name": "raid_bdev1", 00:18:39.954 "raid_level": "raid5f", 00:18:39.954 "base_bdevs": [ 00:18:39.954 "malloc1", 00:18:39.954 "malloc2", 00:18:39.954 "malloc3" 00:18:39.954 ], 00:18:39.954 "strip_size_kb": 64, 00:18:39.954 "superblock": false, 00:18:39.954 "method": "bdev_raid_create", 00:18:39.954 "req_id": 1 00:18:39.954 } 00:18:39.954 Got JSON-RPC error response 00:18:39.954 response: 00:18:39.954 { 00:18:39.954 "code": -17, 00:18:39.954 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:39.954 } 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.954 
08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.954 [2024-11-27 08:50:36.552232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.954 [2024-11-27 08:50:36.552490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.954 [2024-11-27 08:50:36.552571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:39.954 [2024-11-27 08:50:36.552688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.954 [2024-11-27 08:50:36.555940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.954 [2024-11-27 08:50:36.556100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.954 [2024-11-27 08:50:36.556378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:39.954 [2024-11-27 08:50:36.556572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:39.954 pt1 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.954 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.954 "name": "raid_bdev1", 00:18:39.954 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:39.954 "strip_size_kb": 64, 00:18:39.954 "state": "configuring", 00:18:39.954 "raid_level": "raid5f", 00:18:39.954 "superblock": true, 00:18:39.954 "num_base_bdevs": 3, 00:18:39.954 "num_base_bdevs_discovered": 1, 00:18:39.954 
"num_base_bdevs_operational": 3, 00:18:39.954 "base_bdevs_list": [ 00:18:39.954 { 00:18:39.954 "name": "pt1", 00:18:39.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.954 "is_configured": true, 00:18:39.954 "data_offset": 2048, 00:18:39.954 "data_size": 63488 00:18:39.954 }, 00:18:39.954 { 00:18:39.954 "name": null, 00:18:39.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.955 "is_configured": false, 00:18:39.955 "data_offset": 2048, 00:18:39.955 "data_size": 63488 00:18:39.955 }, 00:18:39.955 { 00:18:39.955 "name": null, 00:18:39.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.955 "is_configured": false, 00:18:39.955 "data_offset": 2048, 00:18:39.955 "data_size": 63488 00:18:39.955 } 00:18:39.955 ] 00:18:39.955 }' 00:18:39.955 08:50:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.955 08:50:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.522 [2024-11-27 08:50:37.100661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.522 [2024-11-27 08:50:37.100755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.522 [2024-11-27 08:50:37.100794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:40.522 [2024-11-27 08:50:37.100810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.522 [2024-11-27 08:50:37.101478] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.522 [2024-11-27 08:50:37.101527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.522 [2024-11-27 08:50:37.101660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:40.522 [2024-11-27 08:50:37.101695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.522 pt2 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.522 [2024-11-27 08:50:37.108626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.522 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.523 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.523 "name": "raid_bdev1", 00:18:40.523 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:40.523 "strip_size_kb": 64, 00:18:40.523 "state": "configuring", 00:18:40.523 "raid_level": "raid5f", 00:18:40.523 "superblock": true, 00:18:40.523 "num_base_bdevs": 3, 00:18:40.523 "num_base_bdevs_discovered": 1, 00:18:40.523 "num_base_bdevs_operational": 3, 00:18:40.523 "base_bdevs_list": [ 00:18:40.523 { 00:18:40.523 "name": "pt1", 00:18:40.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.523 "is_configured": true, 00:18:40.523 "data_offset": 2048, 00:18:40.523 "data_size": 63488 00:18:40.523 }, 00:18:40.523 { 00:18:40.523 "name": null, 00:18:40.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.523 "is_configured": false, 00:18:40.523 "data_offset": 0, 00:18:40.523 "data_size": 63488 00:18:40.523 }, 00:18:40.523 { 00:18:40.523 "name": null, 00:18:40.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.523 "is_configured": false, 00:18:40.523 "data_offset": 2048, 00:18:40.523 "data_size": 63488 00:18:40.523 } 00:18:40.523 ] 00:18:40.523 }' 00:18:40.523 08:50:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.523 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.109 [2024-11-27 08:50:37.624775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.109 [2024-11-27 08:50:37.624878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.109 [2024-11-27 08:50:37.624909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:41.109 [2024-11-27 08:50:37.624927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.109 [2024-11-27 08:50:37.625619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.109 [2024-11-27 08:50:37.625651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.109 [2024-11-27 08:50:37.625768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:41.109 [2024-11-27 08:50:37.625808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.109 pt2 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:41.109 08:50:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.109 [2024-11-27 08:50:37.632712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:41.109 [2024-11-27 08:50:37.632775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.109 [2024-11-27 08:50:37.632797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:41.109 [2024-11-27 08:50:37.632814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.109 [2024-11-27 08:50:37.633291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.109 [2024-11-27 08:50:37.633332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:41.109 [2024-11-27 08:50:37.633432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:41.109 [2024-11-27 08:50:37.633472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:41.109 [2024-11-27 08:50:37.633637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:41.109 [2024-11-27 08:50:37.633659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:41.109 [2024-11-27 08:50:37.633994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:41.109 [2024-11-27 08:50:37.639029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:41.109 [2024-11-27 08:50:37.639055] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:41.109 [2024-11-27 08:50:37.639290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.109 pt3 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.109 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.110 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.110 08:50:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.110 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.110 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.110 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.110 "name": "raid_bdev1", 00:18:41.110 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:41.110 "strip_size_kb": 64, 00:18:41.110 "state": "online", 00:18:41.110 "raid_level": "raid5f", 00:18:41.110 "superblock": true, 00:18:41.110 "num_base_bdevs": 3, 00:18:41.110 "num_base_bdevs_discovered": 3, 00:18:41.110 "num_base_bdevs_operational": 3, 00:18:41.110 "base_bdevs_list": [ 00:18:41.110 { 00:18:41.110 "name": "pt1", 00:18:41.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.110 "is_configured": true, 00:18:41.110 "data_offset": 2048, 00:18:41.110 "data_size": 63488 00:18:41.110 }, 00:18:41.110 { 00:18:41.110 "name": "pt2", 00:18:41.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.110 "is_configured": true, 00:18:41.110 "data_offset": 2048, 00:18:41.110 "data_size": 63488 00:18:41.110 }, 00:18:41.110 { 00:18:41.110 "name": "pt3", 00:18:41.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.110 "is_configured": true, 00:18:41.110 "data_offset": 2048, 00:18:41.110 "data_size": 63488 00:18:41.110 } 00:18:41.110 ] 00:18:41.110 }' 00:18:41.110 08:50:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.110 08:50:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:41.676 [2024-11-27 08:50:38.161733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:41.676 "name": "raid_bdev1", 00:18:41.676 "aliases": [ 00:18:41.676 "03bbfcc1-d5ca-4e4f-acef-082766b5d724" 00:18:41.676 ], 00:18:41.676 "product_name": "Raid Volume", 00:18:41.676 "block_size": 512, 00:18:41.676 "num_blocks": 126976, 00:18:41.676 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:41.676 "assigned_rate_limits": { 00:18:41.676 "rw_ios_per_sec": 0, 00:18:41.676 "rw_mbytes_per_sec": 0, 00:18:41.676 "r_mbytes_per_sec": 0, 00:18:41.676 "w_mbytes_per_sec": 0 00:18:41.676 }, 00:18:41.676 "claimed": false, 00:18:41.676 "zoned": false, 00:18:41.676 "supported_io_types": { 00:18:41.676 "read": true, 00:18:41.676 "write": true, 00:18:41.676 "unmap": false, 00:18:41.676 "flush": false, 00:18:41.676 "reset": true, 00:18:41.676 "nvme_admin": false, 00:18:41.676 "nvme_io": false, 00:18:41.676 "nvme_io_md": false, 00:18:41.676 "write_zeroes": true, 00:18:41.676 "zcopy": false, 00:18:41.676 
"get_zone_info": false, 00:18:41.676 "zone_management": false, 00:18:41.676 "zone_append": false, 00:18:41.676 "compare": false, 00:18:41.676 "compare_and_write": false, 00:18:41.676 "abort": false, 00:18:41.676 "seek_hole": false, 00:18:41.676 "seek_data": false, 00:18:41.676 "copy": false, 00:18:41.676 "nvme_iov_md": false 00:18:41.676 }, 00:18:41.676 "driver_specific": { 00:18:41.676 "raid": { 00:18:41.676 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:41.676 "strip_size_kb": 64, 00:18:41.676 "state": "online", 00:18:41.676 "raid_level": "raid5f", 00:18:41.676 "superblock": true, 00:18:41.676 "num_base_bdevs": 3, 00:18:41.676 "num_base_bdevs_discovered": 3, 00:18:41.676 "num_base_bdevs_operational": 3, 00:18:41.676 "base_bdevs_list": [ 00:18:41.676 { 00:18:41.676 "name": "pt1", 00:18:41.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.676 "is_configured": true, 00:18:41.676 "data_offset": 2048, 00:18:41.676 "data_size": 63488 00:18:41.676 }, 00:18:41.676 { 00:18:41.676 "name": "pt2", 00:18:41.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.676 "is_configured": true, 00:18:41.676 "data_offset": 2048, 00:18:41.676 "data_size": 63488 00:18:41.676 }, 00:18:41.676 { 00:18:41.676 "name": "pt3", 00:18:41.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.676 "is_configured": true, 00:18:41.676 "data_offset": 2048, 00:18:41.676 "data_size": 63488 00:18:41.676 } 00:18:41.676 ] 00:18:41.676 } 00:18:41.676 } 00:18:41.676 }' 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:41.676 pt2 00:18:41.676 pt3' 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.676 08:50:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.676 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.677 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.677 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.677 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.677 08:50:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.677 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:41.677 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.677 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.677 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 [2024-11-27 08:50:38.477763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 03bbfcc1-d5ca-4e4f-acef-082766b5d724 '!=' 03bbfcc1-d5ca-4e4f-acef-082766b5d724 ']' 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 [2024-11-27 08:50:38.533630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.935 "name": "raid_bdev1", 00:18:41.935 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:41.935 "strip_size_kb": 64, 00:18:41.935 "state": "online", 00:18:41.935 "raid_level": "raid5f", 00:18:41.935 "superblock": true, 00:18:41.935 "num_base_bdevs": 3, 00:18:41.935 "num_base_bdevs_discovered": 2, 00:18:41.935 "num_base_bdevs_operational": 2, 00:18:41.935 "base_bdevs_list": [ 00:18:41.935 { 00:18:41.935 "name": null, 00:18:41.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.935 "is_configured": false, 00:18:41.935 "data_offset": 0, 00:18:41.935 "data_size": 63488 00:18:41.935 }, 00:18:41.935 { 00:18:41.935 "name": "pt2", 00:18:41.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.935 "is_configured": true, 00:18:41.935 "data_offset": 2048, 00:18:41.935 "data_size": 63488 00:18:41.935 }, 00:18:41.935 { 00:18:41.935 "name": "pt3", 00:18:41.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.935 "is_configured": true, 00:18:41.935 "data_offset": 2048, 00:18:41.935 "data_size": 63488 00:18:41.935 } 00:18:41.935 ] 00:18:41.935 }' 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.935 08:50:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.501 [2024-11-27 08:50:39.053678] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.501 [2024-11-27 08:50:39.053719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.501 [2024-11-27 08:50:39.053834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.501 [2024-11-27 08:50:39.053921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.501 [2024-11-27 08:50:39.053945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.501 [2024-11-27 08:50:39.141678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.501 [2024-11-27 08:50:39.141771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.501 [2024-11-27 08:50:39.141801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:42.501 [2024-11-27 08:50:39.141820] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:18:42.501 [2024-11-27 08:50:39.145031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.501 [2024-11-27 08:50:39.145214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.501 [2024-11-27 08:50:39.145383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:42.501 [2024-11-27 08:50:39.145456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.501 pt2 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.501 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.502 "name": "raid_bdev1", 00:18:42.502 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:42.502 "strip_size_kb": 64, 00:18:42.502 "state": "configuring", 00:18:42.502 "raid_level": "raid5f", 00:18:42.502 "superblock": true, 00:18:42.502 "num_base_bdevs": 3, 00:18:42.502 "num_base_bdevs_discovered": 1, 00:18:42.502 "num_base_bdevs_operational": 2, 00:18:42.502 "base_bdevs_list": [ 00:18:42.502 { 00:18:42.502 "name": null, 00:18:42.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.502 "is_configured": false, 00:18:42.502 "data_offset": 2048, 00:18:42.502 "data_size": 63488 00:18:42.502 }, 00:18:42.502 { 00:18:42.502 "name": "pt2", 00:18:42.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.502 "is_configured": true, 00:18:42.502 "data_offset": 2048, 00:18:42.502 "data_size": 63488 00:18:42.502 }, 00:18:42.502 { 00:18:42.502 "name": null, 00:18:42.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.502 "is_configured": false, 00:18:42.502 "data_offset": 2048, 00:18:42.502 "data_size": 63488 00:18:42.502 } 00:18:42.502 ] 00:18:42.502 }' 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.502 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.067 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.068 [2024-11-27 08:50:39.661838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:43.068 [2024-11-27 08:50:39.661934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.068 [2024-11-27 08:50:39.661973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:43.068 [2024-11-27 08:50:39.661993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.068 [2024-11-27 08:50:39.662684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.068 [2024-11-27 08:50:39.662724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:43.068 [2024-11-27 08:50:39.662838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:43.068 [2024-11-27 08:50:39.662889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:43.068 [2024-11-27 08:50:39.663051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:43.068 [2024-11-27 08:50:39.663073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:43.068 [2024-11-27 08:50:39.663431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:43.068 [2024-11-27 08:50:39.668385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:43.068 [2024-11-27 08:50:39.668409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:18:43.068 [2024-11-27 08:50:39.668775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.068 pt3 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.068 08:50:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.068 "name": "raid_bdev1", 00:18:43.068 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:43.068 "strip_size_kb": 64, 00:18:43.068 "state": "online", 00:18:43.068 "raid_level": "raid5f", 00:18:43.068 "superblock": true, 00:18:43.068 "num_base_bdevs": 3, 00:18:43.068 "num_base_bdevs_discovered": 2, 00:18:43.068 "num_base_bdevs_operational": 2, 00:18:43.068 "base_bdevs_list": [ 00:18:43.068 { 00:18:43.068 "name": null, 00:18:43.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.068 "is_configured": false, 00:18:43.068 "data_offset": 2048, 00:18:43.068 "data_size": 63488 00:18:43.068 }, 00:18:43.068 { 00:18:43.068 "name": "pt2", 00:18:43.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.068 "is_configured": true, 00:18:43.068 "data_offset": 2048, 00:18:43.068 "data_size": 63488 00:18:43.068 }, 00:18:43.068 { 00:18:43.068 "name": "pt3", 00:18:43.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.068 "is_configured": true, 00:18:43.068 "data_offset": 2048, 00:18:43.068 "data_size": 63488 00:18:43.068 } 00:18:43.068 ] 00:18:43.068 }' 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.068 08:50:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 [2024-11-27 08:50:40.166871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.635 [2024-11-27 08:50:40.166917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.635 [2024-11-27 08:50:40.167029] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.635 [2024-11-27 08:50:40.167123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.635 [2024-11-27 08:50:40.167140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 [2024-11-27 08:50:40.242900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.635 [2024-11-27 08:50:40.243103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.635 [2024-11-27 08:50:40.243145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:43.635 [2024-11-27 08:50:40.243161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.635 [2024-11-27 08:50:40.246263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.635 [2024-11-27 08:50:40.246455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.635 [2024-11-27 08:50:40.246580] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:43.635 [2024-11-27 08:50:40.246645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.635 [2024-11-27 08:50:40.246829] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:43.635 [2024-11-27 08:50:40.246848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.635 [2024-11-27 08:50:40.246873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:43.635 [2024-11-27 08:50:40.246947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.635 pt1 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:43.635 08:50:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.635 "name": "raid_bdev1", 00:18:43.635 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:43.635 "strip_size_kb": 64, 00:18:43.635 "state": "configuring", 00:18:43.635 "raid_level": "raid5f", 00:18:43.635 
"superblock": true, 00:18:43.635 "num_base_bdevs": 3, 00:18:43.635 "num_base_bdevs_discovered": 1, 00:18:43.635 "num_base_bdevs_operational": 2, 00:18:43.635 "base_bdevs_list": [ 00:18:43.635 { 00:18:43.635 "name": null, 00:18:43.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.635 "is_configured": false, 00:18:43.635 "data_offset": 2048, 00:18:43.635 "data_size": 63488 00:18:43.635 }, 00:18:43.635 { 00:18:43.635 "name": "pt2", 00:18:43.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.635 "is_configured": true, 00:18:43.635 "data_offset": 2048, 00:18:43.635 "data_size": 63488 00:18:43.635 }, 00:18:43.635 { 00:18:43.635 "name": null, 00:18:43.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.635 "is_configured": false, 00:18:43.635 "data_offset": 2048, 00:18:43.635 "data_size": 63488 00:18:43.635 } 00:18:43.635 ] 00:18:43.635 }' 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.635 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.203 [2024-11-27 08:50:40.823126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:44.203 [2024-11-27 08:50:40.823219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.203 [2024-11-27 08:50:40.823255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:44.203 [2024-11-27 08:50:40.823271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.203 [2024-11-27 08:50:40.823949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.203 [2024-11-27 08:50:40.823981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:44.203 [2024-11-27 08:50:40.824099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:44.203 [2024-11-27 08:50:40.824135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:44.203 [2024-11-27 08:50:40.824301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:44.203 [2024-11-27 08:50:40.824324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:44.203 [2024-11-27 08:50:40.824679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:44.203 [2024-11-27 08:50:40.829676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:44.203 pt3 00:18:44.203 [2024-11-27 08:50:40.829847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:44.203 [2024-11-27 08:50:40.830178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.203 "name": "raid_bdev1", 00:18:44.203 "uuid": "03bbfcc1-d5ca-4e4f-acef-082766b5d724", 00:18:44.203 "strip_size_kb": 64, 00:18:44.203 "state": "online", 00:18:44.203 "raid_level": 
"raid5f", 00:18:44.203 "superblock": true, 00:18:44.203 "num_base_bdevs": 3, 00:18:44.203 "num_base_bdevs_discovered": 2, 00:18:44.203 "num_base_bdevs_operational": 2, 00:18:44.203 "base_bdevs_list": [ 00:18:44.203 { 00:18:44.203 "name": null, 00:18:44.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.203 "is_configured": false, 00:18:44.203 "data_offset": 2048, 00:18:44.203 "data_size": 63488 00:18:44.203 }, 00:18:44.203 { 00:18:44.203 "name": "pt2", 00:18:44.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.203 "is_configured": true, 00:18:44.203 "data_offset": 2048, 00:18:44.203 "data_size": 63488 00:18:44.203 }, 00:18:44.203 { 00:18:44.203 "name": "pt3", 00:18:44.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:44.203 "is_configured": true, 00:18:44.203 "data_offset": 2048, 00:18:44.203 "data_size": 63488 00:18:44.203 } 00:18:44.203 ] 00:18:44.203 }' 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.203 08:50:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:44.769 [2024-11-27 08:50:41.392734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 03bbfcc1-d5ca-4e4f-acef-082766b5d724 '!=' 03bbfcc1-d5ca-4e4f-acef-082766b5d724 ']' 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81632 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 81632 ']' 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # kill -0 81632 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # uname 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 81632 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 81632' 00:18:44.769 killing process with pid 81632 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # kill 81632 00:18:44.769 [2024-11-27 08:50:41.474668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.769 08:50:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@975 -- # wait 
81632 00:18:44.769 [2024-11-27 08:50:41.474954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.769 [2024-11-27 08:50:41.475147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.769 [2024-11-27 08:50:41.475288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:45.027 [2024-11-27 08:50:41.764172] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.405 08:50:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:46.405 00:18:46.405 real 0m8.863s 00:18:46.405 user 0m14.348s 00:18:46.405 sys 0m1.331s 00:18:46.405 08:50:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:18:46.405 08:50:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.405 ************************************ 00:18:46.405 END TEST raid5f_superblock_test 00:18:46.405 ************************************ 00:18:46.405 08:50:42 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:46.405 08:50:42 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:18:46.405 08:50:42 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:18:46.405 08:50:42 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:18:46.405 08:50:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.405 ************************************ 00:18:46.405 START TEST raid5f_rebuild_test 00:18:46.405 ************************************ 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid5f 3 false false true 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:46.405 08:50:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:46.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82087 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82087 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # '[' -z 82087 ']' 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:46.405 08:50:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.405 [2024-11-27 08:50:43.029534] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:18:46.406 [2024-11-27 08:50:43.029865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82087 ] 00:18:46.406 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:46.406 Zero copy mechanism will not be used. 00:18:46.664 [2024-11-27 08:50:43.211017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.664 [2024-11-27 08:50:43.382231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.922 [2024-11-27 08:50:43.605144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.922 [2024-11-27 08:50:43.605477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # return 0 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.489 BaseBdev1_malloc 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.489 
08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.489 [2024-11-27 08:50:44.105577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:47.489 [2024-11-27 08:50:44.105671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.489 [2024-11-27 08:50:44.105712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:47.489 [2024-11-27 08:50:44.105734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.489 [2024-11-27 08:50:44.108784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.489 [2024-11-27 08:50:44.108836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.489 BaseBdev1 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.489 BaseBdev2_malloc 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.489 [2024-11-27 08:50:44.161241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:47.489 [2024-11-27 08:50:44.161365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.489 [2024-11-27 08:50:44.161405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:47.489 [2024-11-27 08:50:44.161429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.489 [2024-11-27 08:50:44.164611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.489 [2024-11-27 08:50:44.164661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:47.489 BaseBdev2 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.489 BaseBdev3_malloc 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.489 [2024-11-27 08:50:44.238528] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:47.489 [2024-11-27 08:50:44.238807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.489 [2024-11-27 08:50:44.238862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:47.489 [2024-11-27 08:50:44.238886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.489 [2024-11-27 08:50:44.242135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.489 [2024-11-27 08:50:44.242320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:47.489 BaseBdev3 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.489 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.747 spare_malloc 00:18:47.747 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.748 spare_delay 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.748 [2024-11-27 08:50:44.311999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.748 [2024-11-27 08:50:44.312091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.748 [2024-11-27 08:50:44.312129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:47.748 [2024-11-27 08:50:44.312151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.748 [2024-11-27 08:50:44.315541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.748 [2024-11-27 08:50:44.315740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.748 spare 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.748 [2024-11-27 08:50:44.320172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.748 [2024-11-27 08:50:44.322836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.748 [2024-11-27 08:50:44.323076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:47.748 [2024-11-27 08:50:44.323228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:47.748 [2024-11-27 08:50:44.323248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:47.748 [2024-11-27 
08:50:44.323683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:47.748 [2024-11-27 08:50:44.328935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:47.748 [2024-11-27 08:50:44.328969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:47.748 [2024-11-27 08:50:44.329297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.748 "name": "raid_bdev1", 00:18:47.748 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:47.748 "strip_size_kb": 64, 00:18:47.748 "state": "online", 00:18:47.748 "raid_level": "raid5f", 00:18:47.748 "superblock": false, 00:18:47.748 "num_base_bdevs": 3, 00:18:47.748 "num_base_bdevs_discovered": 3, 00:18:47.748 "num_base_bdevs_operational": 3, 00:18:47.748 "base_bdevs_list": [ 00:18:47.748 { 00:18:47.748 "name": "BaseBdev1", 00:18:47.748 "uuid": "a0147a92-027b-5c23-a250-16e671c2d6e3", 00:18:47.748 "is_configured": true, 00:18:47.748 "data_offset": 0, 00:18:47.748 "data_size": 65536 00:18:47.748 }, 00:18:47.748 { 00:18:47.748 "name": "BaseBdev2", 00:18:47.748 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:47.748 "is_configured": true, 00:18:47.748 "data_offset": 0, 00:18:47.748 "data_size": 65536 00:18:47.748 }, 00:18:47.748 { 00:18:47.748 "name": "BaseBdev3", 00:18:47.748 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:47.748 "is_configured": true, 00:18:47.748 "data_offset": 0, 00:18:47.748 "data_size": 65536 00:18:47.748 } 00:18:47.748 ] 00:18:47.748 }' 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.748 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 [2024-11-27 08:50:44.848069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.315 08:50:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:48.573 [2024-11-27 08:50:45.220155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:48.573 /dev/nbd0 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # break 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.573 1+0 records in 00:18:48.573 1+0 records out 00:18:48.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630954 s, 
6.5 MB/s 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:48.573 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:48.574 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:48.574 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:18:49.142 512+0 records in 00:18:49.142 512+0 records out 00:18:49.142 67108864 bytes (67 MB, 64 MiB) copied, 0.453735 s, 148 MB/s 00:18:49.142 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:49.142 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:49.142 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:49.142 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:49.142 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:49.142 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:18:49.142 08:50:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:49.401 [2024-11-27 08:50:46.018096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.401 [2024-11-27 08:50:46.036404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.401 "name": "raid_bdev1", 00:18:49.401 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:49.401 "strip_size_kb": 64, 00:18:49.401 "state": "online", 00:18:49.401 "raid_level": "raid5f", 00:18:49.401 "superblock": false, 00:18:49.401 "num_base_bdevs": 3, 00:18:49.401 "num_base_bdevs_discovered": 2, 00:18:49.401 "num_base_bdevs_operational": 2, 00:18:49.401 "base_bdevs_list": [ 00:18:49.401 { 00:18:49.401 "name": null, 00:18:49.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.401 "is_configured": false, 00:18:49.401 "data_offset": 0, 00:18:49.401 "data_size": 65536 00:18:49.401 }, 
00:18:49.401 { 00:18:49.401 "name": "BaseBdev2", 00:18:49.401 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:49.401 "is_configured": true, 00:18:49.401 "data_offset": 0, 00:18:49.401 "data_size": 65536 00:18:49.401 }, 00:18:49.401 { 00:18:49.401 "name": "BaseBdev3", 00:18:49.401 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:49.401 "is_configured": true, 00:18:49.401 "data_offset": 0, 00:18:49.401 "data_size": 65536 00:18:49.401 } 00:18:49.401 ] 00:18:49.401 }' 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.401 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.969 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:49.969 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.969 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.969 [2024-11-27 08:50:46.580598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.969 [2024-11-27 08:50:46.596815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:18:49.969 08:50:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.969 08:50:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:49.969 [2024-11-27 08:50:46.604436] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.903 "name": "raid_bdev1", 00:18:50.903 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:50.903 "strip_size_kb": 64, 00:18:50.903 "state": "online", 00:18:50.903 "raid_level": "raid5f", 00:18:50.903 "superblock": false, 00:18:50.903 "num_base_bdevs": 3, 00:18:50.903 "num_base_bdevs_discovered": 3, 00:18:50.903 "num_base_bdevs_operational": 3, 00:18:50.903 "process": { 00:18:50.903 "type": "rebuild", 00:18:50.903 "target": "spare", 00:18:50.903 "progress": { 00:18:50.903 "blocks": 18432, 00:18:50.903 "percent": 14 00:18:50.903 } 00:18:50.903 }, 00:18:50.903 "base_bdevs_list": [ 00:18:50.903 { 00:18:50.903 "name": "spare", 00:18:50.903 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:50.903 "is_configured": true, 00:18:50.903 "data_offset": 0, 00:18:50.903 "data_size": 65536 00:18:50.903 }, 00:18:50.903 { 00:18:50.903 "name": "BaseBdev2", 00:18:50.903 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:50.903 "is_configured": true, 00:18:50.903 "data_offset": 0, 00:18:50.903 "data_size": 65536 00:18:50.903 }, 00:18:50.903 { 00:18:50.903 "name": "BaseBdev3", 00:18:50.903 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:50.903 "is_configured": true, 00:18:50.903 
"data_offset": 0, 00:18:50.903 "data_size": 65536 00:18:50.903 } 00:18:50.903 ] 00:18:50.903 }' 00:18:50.903 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.161 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.161 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.162 [2024-11-27 08:50:47.746881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.162 [2024-11-27 08:50:47.823368] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:51.162 [2024-11-27 08:50:47.823478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.162 [2024-11-27 08:50:47.823512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.162 [2024-11-27 08:50:47.823526] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.162 08:50:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.162 "name": "raid_bdev1", 00:18:51.162 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:51.162 "strip_size_kb": 64, 00:18:51.162 "state": "online", 00:18:51.162 "raid_level": "raid5f", 00:18:51.162 "superblock": false, 00:18:51.162 "num_base_bdevs": 3, 00:18:51.162 "num_base_bdevs_discovered": 2, 00:18:51.162 "num_base_bdevs_operational": 2, 00:18:51.162 "base_bdevs_list": [ 00:18:51.162 { 00:18:51.162 "name": null, 00:18:51.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.162 "is_configured": false, 00:18:51.162 "data_offset": 0, 00:18:51.162 "data_size": 65536 00:18:51.162 }, 00:18:51.162 { 00:18:51.162 
"name": "BaseBdev2", 00:18:51.162 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:51.162 "is_configured": true, 00:18:51.162 "data_offset": 0, 00:18:51.162 "data_size": 65536 00:18:51.162 }, 00:18:51.162 { 00:18:51.162 "name": "BaseBdev3", 00:18:51.162 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:51.162 "is_configured": true, 00:18:51.162 "data_offset": 0, 00:18:51.162 "data_size": 65536 00:18:51.162 } 00:18:51.162 ] 00:18:51.162 }' 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.162 08:50:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.728 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.728 "name": "raid_bdev1", 00:18:51.728 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:51.728 "strip_size_kb": 64, 00:18:51.728 "state": 
"online", 00:18:51.728 "raid_level": "raid5f", 00:18:51.728 "superblock": false, 00:18:51.728 "num_base_bdevs": 3, 00:18:51.728 "num_base_bdevs_discovered": 2, 00:18:51.728 "num_base_bdevs_operational": 2, 00:18:51.728 "base_bdevs_list": [ 00:18:51.728 { 00:18:51.728 "name": null, 00:18:51.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.729 "is_configured": false, 00:18:51.729 "data_offset": 0, 00:18:51.729 "data_size": 65536 00:18:51.729 }, 00:18:51.729 { 00:18:51.729 "name": "BaseBdev2", 00:18:51.729 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:51.729 "is_configured": true, 00:18:51.729 "data_offset": 0, 00:18:51.729 "data_size": 65536 00:18:51.729 }, 00:18:51.729 { 00:18:51.729 "name": "BaseBdev3", 00:18:51.729 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:51.729 "is_configured": true, 00:18:51.729 "data_offset": 0, 00:18:51.729 "data_size": 65536 00:18:51.729 } 00:18:51.729 ] 00:18:51.729 }' 00:18:51.729 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.729 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.729 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.986 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.986 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.986 08:50:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.986 08:50:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.986 [2024-11-27 08:50:48.490394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.986 [2024-11-27 08:50:48.505864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:51.986 08:50:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.986 08:50:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:51.986 [2024-11-27 08:50:48.513474] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.922 "name": "raid_bdev1", 00:18:52.922 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:52.922 "strip_size_kb": 64, 00:18:52.922 "state": "online", 00:18:52.922 "raid_level": "raid5f", 00:18:52.922 "superblock": false, 00:18:52.922 "num_base_bdevs": 3, 00:18:52.922 "num_base_bdevs_discovered": 3, 00:18:52.922 "num_base_bdevs_operational": 3, 00:18:52.922 "process": { 00:18:52.922 "type": "rebuild", 00:18:52.922 "target": "spare", 00:18:52.922 "progress": { 
00:18:52.922 "blocks": 18432, 00:18:52.922 "percent": 14 00:18:52.922 } 00:18:52.922 }, 00:18:52.922 "base_bdevs_list": [ 00:18:52.922 { 00:18:52.922 "name": "spare", 00:18:52.922 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:52.922 "is_configured": true, 00:18:52.922 "data_offset": 0, 00:18:52.922 "data_size": 65536 00:18:52.922 }, 00:18:52.922 { 00:18:52.922 "name": "BaseBdev2", 00:18:52.922 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:52.922 "is_configured": true, 00:18:52.922 "data_offset": 0, 00:18:52.922 "data_size": 65536 00:18:52.922 }, 00:18:52.922 { 00:18:52.922 "name": "BaseBdev3", 00:18:52.922 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:52.922 "is_configured": true, 00:18:52.922 "data_offset": 0, 00:18:52.922 "data_size": 65536 00:18:52.922 } 00:18:52.922 ] 00:18:52.922 }' 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:52.922 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=603 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.923 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.181 08:50:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.181 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.181 "name": "raid_bdev1", 00:18:53.181 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:53.181 "strip_size_kb": 64, 00:18:53.181 "state": "online", 00:18:53.181 "raid_level": "raid5f", 00:18:53.181 "superblock": false, 00:18:53.181 "num_base_bdevs": 3, 00:18:53.181 "num_base_bdevs_discovered": 3, 00:18:53.181 "num_base_bdevs_operational": 3, 00:18:53.181 "process": { 00:18:53.181 "type": "rebuild", 00:18:53.181 "target": "spare", 00:18:53.181 "progress": { 00:18:53.181 "blocks": 22528, 00:18:53.181 "percent": 17 00:18:53.181 } 00:18:53.181 }, 00:18:53.181 "base_bdevs_list": [ 00:18:53.181 { 00:18:53.181 "name": "spare", 00:18:53.181 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:53.181 "is_configured": true, 00:18:53.181 "data_offset": 0, 00:18:53.181 "data_size": 65536 00:18:53.181 }, 00:18:53.181 { 00:18:53.181 "name": "BaseBdev2", 00:18:53.181 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:53.181 "is_configured": true, 00:18:53.181 
"data_offset": 0, 00:18:53.181 "data_size": 65536 00:18:53.181 }, 00:18:53.181 { 00:18:53.181 "name": "BaseBdev3", 00:18:53.181 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:53.181 "is_configured": true, 00:18:53.181 "data_offset": 0, 00:18:53.181 "data_size": 65536 00:18:53.181 } 00:18:53.181 ] 00:18:53.181 }' 00:18:53.181 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.181 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.181 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.181 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.181 08:50:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.114 08:50:50 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.114 "name": "raid_bdev1", 00:18:54.114 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:54.114 "strip_size_kb": 64, 00:18:54.114 "state": "online", 00:18:54.114 "raid_level": "raid5f", 00:18:54.114 "superblock": false, 00:18:54.114 "num_base_bdevs": 3, 00:18:54.114 "num_base_bdevs_discovered": 3, 00:18:54.114 "num_base_bdevs_operational": 3, 00:18:54.114 "process": { 00:18:54.114 "type": "rebuild", 00:18:54.114 "target": "spare", 00:18:54.114 "progress": { 00:18:54.114 "blocks": 45056, 00:18:54.114 "percent": 34 00:18:54.114 } 00:18:54.114 }, 00:18:54.114 "base_bdevs_list": [ 00:18:54.114 { 00:18:54.114 "name": "spare", 00:18:54.114 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:54.114 "is_configured": true, 00:18:54.114 "data_offset": 0, 00:18:54.114 "data_size": 65536 00:18:54.114 }, 00:18:54.114 { 00:18:54.114 "name": "BaseBdev2", 00:18:54.114 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:54.114 "is_configured": true, 00:18:54.114 "data_offset": 0, 00:18:54.114 "data_size": 65536 00:18:54.114 }, 00:18:54.114 { 00:18:54.114 "name": "BaseBdev3", 00:18:54.114 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:54.114 "is_configured": true, 00:18:54.114 "data_offset": 0, 00:18:54.114 "data_size": 65536 00:18:54.114 } 00:18:54.114 ] 00:18:54.114 }' 00:18:54.114 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.373 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.373 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.373 08:50:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.373 08:50:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.307 08:50:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.307 08:50:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.307 08:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.307 "name": "raid_bdev1", 00:18:55.307 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:55.307 "strip_size_kb": 64, 00:18:55.307 "state": "online", 00:18:55.307 "raid_level": "raid5f", 00:18:55.307 "superblock": false, 00:18:55.307 "num_base_bdevs": 3, 00:18:55.307 "num_base_bdevs_discovered": 3, 00:18:55.307 "num_base_bdevs_operational": 3, 00:18:55.307 "process": { 00:18:55.307 "type": "rebuild", 00:18:55.307 "target": "spare", 00:18:55.307 "progress": { 00:18:55.307 "blocks": 69632, 00:18:55.307 "percent": 53 00:18:55.307 } 00:18:55.307 }, 00:18:55.307 "base_bdevs_list": [ 00:18:55.307 { 00:18:55.307 "name": "spare", 00:18:55.307 
"uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:55.307 "is_configured": true, 00:18:55.307 "data_offset": 0, 00:18:55.307 "data_size": 65536 00:18:55.307 }, 00:18:55.307 { 00:18:55.307 "name": "BaseBdev2", 00:18:55.307 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:55.307 "is_configured": true, 00:18:55.307 "data_offset": 0, 00:18:55.307 "data_size": 65536 00:18:55.307 }, 00:18:55.307 { 00:18:55.307 "name": "BaseBdev3", 00:18:55.307 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:55.307 "is_configured": true, 00:18:55.307 "data_offset": 0, 00:18:55.307 "data_size": 65536 00:18:55.307 } 00:18:55.307 ] 00:18:55.307 }' 00:18:55.307 08:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.565 08:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.566 08:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.566 08:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.566 08:50:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.501 08:50:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.501 "name": "raid_bdev1", 00:18:56.501 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:56.501 "strip_size_kb": 64, 00:18:56.501 "state": "online", 00:18:56.501 "raid_level": "raid5f", 00:18:56.501 "superblock": false, 00:18:56.501 "num_base_bdevs": 3, 00:18:56.501 "num_base_bdevs_discovered": 3, 00:18:56.501 "num_base_bdevs_operational": 3, 00:18:56.501 "process": { 00:18:56.501 "type": "rebuild", 00:18:56.501 "target": "spare", 00:18:56.501 "progress": { 00:18:56.501 "blocks": 92160, 00:18:56.501 "percent": 70 00:18:56.501 } 00:18:56.501 }, 00:18:56.501 "base_bdevs_list": [ 00:18:56.501 { 00:18:56.501 "name": "spare", 00:18:56.501 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:56.501 "is_configured": true, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 }, 00:18:56.501 { 00:18:56.501 "name": "BaseBdev2", 00:18:56.501 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:56.501 "is_configured": true, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 }, 00:18:56.501 { 00:18:56.501 "name": "BaseBdev3", 00:18:56.501 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:56.501 "is_configured": true, 00:18:56.501 "data_offset": 0, 00:18:56.501 "data_size": 65536 00:18:56.501 } 00:18:56.501 ] 00:18:56.501 }' 00:18:56.501 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.759 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:56.759 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.759 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.759 08:50:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.692 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.692 "name": "raid_bdev1", 00:18:57.692 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:57.692 "strip_size_kb": 64, 00:18:57.692 "state": "online", 00:18:57.692 "raid_level": "raid5f", 00:18:57.692 "superblock": false, 00:18:57.692 "num_base_bdevs": 3, 00:18:57.692 "num_base_bdevs_discovered": 3, 00:18:57.692 
"num_base_bdevs_operational": 3, 00:18:57.692 "process": { 00:18:57.692 "type": "rebuild", 00:18:57.693 "target": "spare", 00:18:57.693 "progress": { 00:18:57.693 "blocks": 116736, 00:18:57.693 "percent": 89 00:18:57.693 } 00:18:57.693 }, 00:18:57.693 "base_bdevs_list": [ 00:18:57.693 { 00:18:57.693 "name": "spare", 00:18:57.693 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:57.693 "is_configured": true, 00:18:57.693 "data_offset": 0, 00:18:57.693 "data_size": 65536 00:18:57.693 }, 00:18:57.693 { 00:18:57.693 "name": "BaseBdev2", 00:18:57.693 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:57.693 "is_configured": true, 00:18:57.693 "data_offset": 0, 00:18:57.693 "data_size": 65536 00:18:57.693 }, 00:18:57.693 { 00:18:57.693 "name": "BaseBdev3", 00:18:57.693 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:57.693 "is_configured": true, 00:18:57.693 "data_offset": 0, 00:18:57.693 "data_size": 65536 00:18:57.693 } 00:18:57.693 ] 00:18:57.693 }' 00:18:57.693 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.693 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.693 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.958 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.958 08:50:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:58.538 [2024-11-27 08:50:55.003273] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:58.538 [2024-11-27 08:50:55.003437] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:58.538 [2024-11-27 08:50:55.003507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.797 "name": "raid_bdev1", 00:18:58.797 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:58.797 "strip_size_kb": 64, 00:18:58.797 "state": "online", 00:18:58.797 "raid_level": "raid5f", 00:18:58.797 "superblock": false, 00:18:58.797 "num_base_bdevs": 3, 00:18:58.797 "num_base_bdevs_discovered": 3, 00:18:58.797 "num_base_bdevs_operational": 3, 00:18:58.797 "base_bdevs_list": [ 00:18:58.797 { 00:18:58.797 "name": "spare", 00:18:58.797 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:58.797 "is_configured": true, 00:18:58.797 "data_offset": 0, 00:18:58.797 "data_size": 65536 00:18:58.797 }, 00:18:58.797 { 00:18:58.797 "name": "BaseBdev2", 00:18:58.797 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:58.797 "is_configured": true, 00:18:58.797 
"data_offset": 0, 00:18:58.797 "data_size": 65536 00:18:58.797 }, 00:18:58.797 { 00:18:58.797 "name": "BaseBdev3", 00:18:58.797 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:58.797 "is_configured": true, 00:18:58.797 "data_offset": 0, 00:18:58.797 "data_size": 65536 00:18:58.797 } 00:18:58.797 ] 00:18:58.797 }' 00:18:58.797 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.060 08:50:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.060 "name": "raid_bdev1", 00:18:59.060 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:59.060 "strip_size_kb": 64, 00:18:59.060 "state": "online", 00:18:59.060 "raid_level": "raid5f", 00:18:59.060 "superblock": false, 00:18:59.060 "num_base_bdevs": 3, 00:18:59.060 "num_base_bdevs_discovered": 3, 00:18:59.060 "num_base_bdevs_operational": 3, 00:18:59.060 "base_bdevs_list": [ 00:18:59.060 { 00:18:59.060 "name": "spare", 00:18:59.060 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:59.060 "is_configured": true, 00:18:59.060 "data_offset": 0, 00:18:59.060 "data_size": 65536 00:18:59.060 }, 00:18:59.060 { 00:18:59.060 "name": "BaseBdev2", 00:18:59.060 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:59.060 "is_configured": true, 00:18:59.060 "data_offset": 0, 00:18:59.060 "data_size": 65536 00:18:59.060 }, 00:18:59.060 { 00:18:59.060 "name": "BaseBdev3", 00:18:59.060 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:59.060 "is_configured": true, 00:18:59.060 "data_offset": 0, 00:18:59.060 "data_size": 65536 00:18:59.060 } 00:18:59.060 ] 00:18:59.060 }' 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.060 08:50:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.060 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.318 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.318 "name": "raid_bdev1", 00:18:59.318 "uuid": "9b44bab5-049a-4d67-8dda-851df3db570a", 00:18:59.318 "strip_size_kb": 64, 00:18:59.318 "state": "online", 00:18:59.318 "raid_level": "raid5f", 00:18:59.318 "superblock": false, 00:18:59.318 "num_base_bdevs": 3, 00:18:59.318 "num_base_bdevs_discovered": 3, 00:18:59.318 "num_base_bdevs_operational": 3, 00:18:59.318 "base_bdevs_list": [ 00:18:59.318 { 00:18:59.318 "name": "spare", 00:18:59.318 "uuid": "e451e2cd-bb9d-5b23-9bf6-2969ccd118f0", 00:18:59.318 "is_configured": true, 00:18:59.318 "data_offset": 0, 00:18:59.318 "data_size": 65536 00:18:59.318 }, 00:18:59.318 { 00:18:59.318 
"name": "BaseBdev2", 00:18:59.318 "uuid": "b77eab0c-58fd-5e96-a6dc-a3a78902654f", 00:18:59.318 "is_configured": true, 00:18:59.318 "data_offset": 0, 00:18:59.318 "data_size": 65536 00:18:59.318 }, 00:18:59.318 { 00:18:59.319 "name": "BaseBdev3", 00:18:59.319 "uuid": "a336bec1-1a1a-5f2b-85fe-d0c1450c35f4", 00:18:59.319 "is_configured": true, 00:18:59.319 "data_offset": 0, 00:18:59.319 "data_size": 65536 00:18:59.319 } 00:18:59.319 ] 00:18:59.319 }' 00:18:59.319 08:50:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.319 08:50:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 [2024-11-27 08:50:56.265289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.577 [2024-11-27 08:50:56.265347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.577 [2024-11-27 08:50:56.265479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.577 [2024-11-27 08:50:56.265604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.577 [2024-11-27 08:50:56.265632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:59.577 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:00.143 /dev/nbd0 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # break 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.143 1+0 records in 00:19:00.143 1+0 records out 00:19:00.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296553 s, 13.8 MB/s 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.143 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:00.401 /dev/nbd1 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # break 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.401 1+0 records in 00:19:00.401 1+0 records out 00:19:00.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447083 s, 9.2 MB/s 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:19:00.401 08:50:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.401 08:50:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:00.401 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:00.401 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.401 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:00.401 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:00.401 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:00.401 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.401 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.966 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82087 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' -z 82087 ']' 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # kill -0 82087 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # uname 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 82087 00:19:01.225 killing process with pid 82087 00:19:01.225 Received shutdown signal, test time was about 60.000000 seconds 00:19:01.225 00:19:01.225 Latency(us) 00:19:01.225 
[2024-11-27T08:50:57.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.225 [2024-11-27T08:50:57.985Z] =================================================================================================================== 00:19:01.225 [2024-11-27T08:50:57.985Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 82087' 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # kill 82087 00:19:01.225 08:50:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@975 -- # wait 82087 00:19:01.225 [2024-11-27 08:50:57.791306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.483 [2024-11-27 08:50:58.157048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:02.859 00:19:02.859 real 0m16.331s 00:19:02.859 user 0m20.686s 00:19:02.859 sys 0m2.045s 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.859 ************************************ 00:19:02.859 END TEST raid5f_rebuild_test 00:19:02.859 ************************************ 00:19:02.859 08:50:59 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:19:02.859 08:50:59 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:19:02.859 08:50:59 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:19:02.859 08:50:59 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.859 ************************************ 00:19:02.859 START TEST raid5f_rebuild_test_sb 00:19:02.859 ************************************ 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid5f 3 true false true 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:02.859 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82529 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82529 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # '[' -z 82529 ']' 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:02.860 08:50:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.860 [2024-11-27 08:50:59.432506] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:19:02.860 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:02.860 Zero copy mechanism will not be used. 00:19:02.860 [2024-11-27 08:50:59.432723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82529 ] 00:19:02.860 [2024-11-27 08:50:59.613557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.118 [2024-11-27 08:50:59.760159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.376 [2024-11-27 08:50:59.982471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.376 [2024-11-27 08:50:59.982564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.635 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:03.635 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # return 0 00:19:03.635 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:19:03.635 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:03.635 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.635 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 BaseBdev1_malloc 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 [2024-11-27 08:51:00.446600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:03.894 [2024-11-27 08:51:00.446697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.894 [2024-11-27 08:51:00.446738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:03.894 [2024-11-27 08:51:00.446759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.894 [2024-11-27 08:51:00.449732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.894 [2024-11-27 08:51:00.449786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:03.894 BaseBdev1 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:03.894 08:51:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 BaseBdev2_malloc 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 [2024-11-27 08:51:00.506193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:03.894 [2024-11-27 08:51:00.506282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.894 [2024-11-27 08:51:00.506317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:03.894 [2024-11-27 08:51:00.506353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.894 [2024-11-27 08:51:00.509291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.894 [2024-11-27 08:51:00.509356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:03.894 BaseBdev2 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:03.894 BaseBdev3_malloc 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 [2024-11-27 08:51:00.572678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:03.894 [2024-11-27 08:51:00.572764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.894 [2024-11-27 08:51:00.572803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:03.894 [2024-11-27 08:51:00.572824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.894 [2024-11-27 08:51:00.575728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.894 [2024-11-27 08:51:00.575787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:03.894 BaseBdev3 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 spare_malloc 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 spare_delay 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 [2024-11-27 08:51:00.636253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.894 [2024-11-27 08:51:00.636329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.894 [2024-11-27 08:51:00.636375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:03.894 [2024-11-27 08:51:00.636396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.894 [2024-11-27 08:51:00.639408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.894 [2024-11-27 08:51:00.639465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.894 spare 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.894 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.894 [2024-11-27 08:51:00.644443] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.894 [2024-11-27 08:51:00.647021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.894 [2024-11-27 08:51:00.647126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.894 [2024-11-27 08:51:00.647402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:03.894 [2024-11-27 08:51:00.647435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:03.894 [2024-11-27 08:51:00.647784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:04.153 [2024-11-27 08:51:00.653018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:04.153 [2024-11-27 08:51:00.653058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:04.153 [2024-11-27 08:51:00.653302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.153 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.153 "name": "raid_bdev1", 00:19:04.153 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:04.153 "strip_size_kb": 64, 00:19:04.153 "state": "online", 00:19:04.153 "raid_level": "raid5f", 00:19:04.154 "superblock": true, 00:19:04.154 "num_base_bdevs": 3, 00:19:04.154 "num_base_bdevs_discovered": 3, 00:19:04.154 "num_base_bdevs_operational": 3, 00:19:04.154 "base_bdevs_list": [ 00:19:04.154 { 00:19:04.154 "name": "BaseBdev1", 00:19:04.154 "uuid": "13d9ff00-9295-53e1-93e1-3e775e5d9aab", 00:19:04.154 "is_configured": true, 00:19:04.154 "data_offset": 2048, 00:19:04.154 "data_size": 63488 00:19:04.154 }, 00:19:04.154 { 00:19:04.154 "name": "BaseBdev2", 00:19:04.154 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:04.154 "is_configured": true, 00:19:04.154 "data_offset": 2048, 00:19:04.154 "data_size": 63488 00:19:04.154 }, 00:19:04.154 { 00:19:04.154 "name": "BaseBdev3", 00:19:04.154 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:04.154 "is_configured": true, 
00:19:04.154 "data_offset": 2048, 00:19:04.154 "data_size": 63488 00:19:04.154 } 00:19:04.154 ] 00:19:04.154 }' 00:19:04.154 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.154 08:51:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.412 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.412 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.412 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:04.412 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.412 [2024-11-27 08:51:01.167997] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:04.670 08:51:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.670 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:04.928 [2024-11-27 08:51:01.563911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:04.928 /dev/nbd0 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 
-- # (( i <= 20 )) 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.928 1+0 records in 00:19:04.928 1+0 records out 00:19:04.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026951 s, 15.2 MB/s 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:04.928 08:51:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:19:05.494 496+0 records in 00:19:05.494 496+0 records out 00:19:05.494 65011712 bytes (65 MB, 62 MiB) copied, 0.43244 s, 150 MB/s 00:19:05.494 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:05.494 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:05.494 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:05.494 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.494 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:05.494 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.494 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:05.754 [2024-11-27 08:51:02.364122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.754 [2024-11-27 08:51:02.398465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.754 08:51:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.754 "name": "raid_bdev1", 00:19:05.754 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:05.754 "strip_size_kb": 64, 00:19:05.754 "state": "online", 00:19:05.754 "raid_level": "raid5f", 00:19:05.754 "superblock": true, 00:19:05.754 "num_base_bdevs": 3, 00:19:05.754 "num_base_bdevs_discovered": 2, 00:19:05.754 "num_base_bdevs_operational": 2, 00:19:05.754 "base_bdevs_list": [ 00:19:05.754 { 00:19:05.754 "name": null, 00:19:05.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.754 "is_configured": false, 00:19:05.754 "data_offset": 0, 00:19:05.754 "data_size": 63488 00:19:05.754 }, 00:19:05.754 { 00:19:05.754 "name": "BaseBdev2", 00:19:05.754 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:05.754 "is_configured": true, 00:19:05.754 "data_offset": 2048, 00:19:05.754 "data_size": 63488 00:19:05.754 }, 00:19:05.754 { 00:19:05.754 "name": "BaseBdev3", 00:19:05.754 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:05.754 "is_configured": true, 00:19:05.754 "data_offset": 2048, 00:19:05.754 "data_size": 63488 00:19:05.754 } 00:19:05.754 ] 00:19:05.754 }' 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.754 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.322 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:06.322 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.322 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.322 [2024-11-27 08:51:02.906623] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.322 [2024-11-27 08:51:02.922767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:19:06.322 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.322 08:51:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:06.322 [2024-11-27 08:51:02.930416] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.257 "name": "raid_bdev1", 00:19:07.257 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:07.257 "strip_size_kb": 64, 00:19:07.257 "state": "online", 00:19:07.257 "raid_level": "raid5f", 00:19:07.257 
"superblock": true, 00:19:07.257 "num_base_bdevs": 3, 00:19:07.257 "num_base_bdevs_discovered": 3, 00:19:07.257 "num_base_bdevs_operational": 3, 00:19:07.257 "process": { 00:19:07.257 "type": "rebuild", 00:19:07.257 "target": "spare", 00:19:07.257 "progress": { 00:19:07.257 "blocks": 18432, 00:19:07.257 "percent": 14 00:19:07.257 } 00:19:07.257 }, 00:19:07.257 "base_bdevs_list": [ 00:19:07.257 { 00:19:07.257 "name": "spare", 00:19:07.257 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:07.257 "is_configured": true, 00:19:07.257 "data_offset": 2048, 00:19:07.257 "data_size": 63488 00:19:07.257 }, 00:19:07.257 { 00:19:07.257 "name": "BaseBdev2", 00:19:07.257 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:07.257 "is_configured": true, 00:19:07.257 "data_offset": 2048, 00:19:07.257 "data_size": 63488 00:19:07.257 }, 00:19:07.257 { 00:19:07.257 "name": "BaseBdev3", 00:19:07.257 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:07.257 "is_configured": true, 00:19:07.257 "data_offset": 2048, 00:19:07.257 "data_size": 63488 00:19:07.257 } 00:19:07.257 ] 00:19:07.257 }' 00:19:07.257 08:51:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.515 [2024-11-27 08:51:04.096164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:19:07.515 [2024-11-27 08:51:04.146995] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:07.515 [2024-11-27 08:51:04.147088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.515 [2024-11-27 08:51:04.147120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.515 [2024-11-27 08:51:04.147134] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.515 "name": "raid_bdev1", 00:19:07.515 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:07.515 "strip_size_kb": 64, 00:19:07.515 "state": "online", 00:19:07.515 "raid_level": "raid5f", 00:19:07.515 "superblock": true, 00:19:07.515 "num_base_bdevs": 3, 00:19:07.515 "num_base_bdevs_discovered": 2, 00:19:07.515 "num_base_bdevs_operational": 2, 00:19:07.515 "base_bdevs_list": [ 00:19:07.515 { 00:19:07.515 "name": null, 00:19:07.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.515 "is_configured": false, 00:19:07.515 "data_offset": 0, 00:19:07.515 "data_size": 63488 00:19:07.515 }, 00:19:07.515 { 00:19:07.515 "name": "BaseBdev2", 00:19:07.515 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:07.515 "is_configured": true, 00:19:07.515 "data_offset": 2048, 00:19:07.515 "data_size": 63488 00:19:07.515 }, 00:19:07.515 { 00:19:07.515 "name": "BaseBdev3", 00:19:07.515 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:07.515 "is_configured": true, 00:19:07.515 "data_offset": 2048, 00:19:07.515 "data_size": 63488 00:19:07.515 } 00:19:07.515 ] 00:19:07.515 }' 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.515 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.080 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.081 08:51:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.081 "name": "raid_bdev1", 00:19:08.081 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:08.081 "strip_size_kb": 64, 00:19:08.081 "state": "online", 00:19:08.081 "raid_level": "raid5f", 00:19:08.081 "superblock": true, 00:19:08.081 "num_base_bdevs": 3, 00:19:08.081 "num_base_bdevs_discovered": 2, 00:19:08.081 "num_base_bdevs_operational": 2, 00:19:08.081 "base_bdevs_list": [ 00:19:08.081 { 00:19:08.081 "name": null, 00:19:08.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.081 "is_configured": false, 00:19:08.081 "data_offset": 0, 00:19:08.081 "data_size": 63488 00:19:08.081 }, 00:19:08.081 { 00:19:08.081 "name": "BaseBdev2", 00:19:08.081 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:08.081 "is_configured": true, 00:19:08.081 "data_offset": 2048, 00:19:08.081 "data_size": 63488 00:19:08.081 }, 00:19:08.081 { 00:19:08.081 "name": "BaseBdev3", 00:19:08.081 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:08.081 "is_configured": true, 00:19:08.081 "data_offset": 2048, 00:19:08.081 
"data_size": 63488 00:19:08.081 } 00:19:08.081 ] 00:19:08.081 }' 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.081 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.338 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.338 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.338 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.338 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.338 [2024-11-27 08:51:04.872667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.338 [2024-11-27 08:51:04.888097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:19:08.338 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.338 08:51:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:08.338 [2024-11-27 08:51:04.895658] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.272 "name": "raid_bdev1", 00:19:09.272 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:09.272 "strip_size_kb": 64, 00:19:09.272 "state": "online", 00:19:09.272 "raid_level": "raid5f", 00:19:09.272 "superblock": true, 00:19:09.272 "num_base_bdevs": 3, 00:19:09.272 "num_base_bdevs_discovered": 3, 00:19:09.272 "num_base_bdevs_operational": 3, 00:19:09.272 "process": { 00:19:09.272 "type": "rebuild", 00:19:09.272 "target": "spare", 00:19:09.272 "progress": { 00:19:09.272 "blocks": 18432, 00:19:09.272 "percent": 14 00:19:09.272 } 00:19:09.272 }, 00:19:09.272 "base_bdevs_list": [ 00:19:09.272 { 00:19:09.272 "name": "spare", 00:19:09.272 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:09.272 "is_configured": true, 00:19:09.272 "data_offset": 2048, 00:19:09.272 "data_size": 63488 00:19:09.272 }, 00:19:09.272 { 00:19:09.272 "name": "BaseBdev2", 00:19:09.272 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:09.272 "is_configured": true, 00:19:09.272 "data_offset": 2048, 00:19:09.272 "data_size": 63488 00:19:09.272 }, 00:19:09.272 { 00:19:09.272 "name": "BaseBdev3", 00:19:09.272 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:09.272 "is_configured": true, 00:19:09.272 "data_offset": 2048, 00:19:09.272 "data_size": 63488 00:19:09.272 } 00:19:09.272 ] 00:19:09.272 }' 
00:19:09.272 08:51:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.272 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.272 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:09.531 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=620 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.531 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.531 "name": "raid_bdev1", 00:19:09.531 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:09.531 "strip_size_kb": 64, 00:19:09.531 "state": "online", 00:19:09.531 "raid_level": "raid5f", 00:19:09.531 "superblock": true, 00:19:09.531 "num_base_bdevs": 3, 00:19:09.531 "num_base_bdevs_discovered": 3, 00:19:09.531 "num_base_bdevs_operational": 3, 00:19:09.531 "process": { 00:19:09.531 "type": "rebuild", 00:19:09.531 "target": "spare", 00:19:09.531 "progress": { 00:19:09.531 "blocks": 22528, 00:19:09.531 "percent": 17 00:19:09.531 } 00:19:09.531 }, 00:19:09.531 "base_bdevs_list": [ 00:19:09.531 { 00:19:09.531 "name": "spare", 00:19:09.531 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:09.531 "is_configured": true, 00:19:09.531 "data_offset": 2048, 00:19:09.531 "data_size": 63488 00:19:09.531 }, 00:19:09.531 { 00:19:09.531 "name": "BaseBdev2", 00:19:09.531 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:09.531 "is_configured": true, 00:19:09.532 "data_offset": 2048, 00:19:09.532 "data_size": 63488 00:19:09.532 }, 00:19:09.532 { 00:19:09.532 "name": "BaseBdev3", 00:19:09.532 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:09.532 "is_configured": true, 00:19:09.532 "data_offset": 2048, 00:19:09.532 "data_size": 63488 00:19:09.532 } 00:19:09.532 ] 00:19:09.532 }' 00:19:09.532 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.532 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:09.532 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.532 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.532 08:51:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:10.461 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.461 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.461 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.461 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.461 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.461 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.719 "name": "raid_bdev1", 00:19:10.719 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:10.719 "strip_size_kb": 64, 00:19:10.719 "state": "online", 00:19:10.719 "raid_level": "raid5f", 00:19:10.719 "superblock": true, 00:19:10.719 "num_base_bdevs": 3, 00:19:10.719 "num_base_bdevs_discovered": 3, 00:19:10.719 
"num_base_bdevs_operational": 3, 00:19:10.719 "process": { 00:19:10.719 "type": "rebuild", 00:19:10.719 "target": "spare", 00:19:10.719 "progress": { 00:19:10.719 "blocks": 47104, 00:19:10.719 "percent": 37 00:19:10.719 } 00:19:10.719 }, 00:19:10.719 "base_bdevs_list": [ 00:19:10.719 { 00:19:10.719 "name": "spare", 00:19:10.719 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:10.719 "is_configured": true, 00:19:10.719 "data_offset": 2048, 00:19:10.719 "data_size": 63488 00:19:10.719 }, 00:19:10.719 { 00:19:10.719 "name": "BaseBdev2", 00:19:10.719 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:10.719 "is_configured": true, 00:19:10.719 "data_offset": 2048, 00:19:10.719 "data_size": 63488 00:19:10.719 }, 00:19:10.719 { 00:19:10.719 "name": "BaseBdev3", 00:19:10.719 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:10.719 "is_configured": true, 00:19:10.719 "data_offset": 2048, 00:19:10.719 "data_size": 63488 00:19:10.719 } 00:19:10.719 ] 00:19:10.719 }' 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.719 08:51:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.652 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.910 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.910 "name": "raid_bdev1", 00:19:11.910 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:11.910 "strip_size_kb": 64, 00:19:11.910 "state": "online", 00:19:11.910 "raid_level": "raid5f", 00:19:11.910 "superblock": true, 00:19:11.910 "num_base_bdevs": 3, 00:19:11.910 "num_base_bdevs_discovered": 3, 00:19:11.910 "num_base_bdevs_operational": 3, 00:19:11.910 "process": { 00:19:11.910 "type": "rebuild", 00:19:11.910 "target": "spare", 00:19:11.910 "progress": { 00:19:11.910 "blocks": 69632, 00:19:11.910 "percent": 54 00:19:11.910 } 00:19:11.910 }, 00:19:11.910 "base_bdevs_list": [ 00:19:11.910 { 00:19:11.910 "name": "spare", 00:19:11.910 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:11.910 "is_configured": true, 00:19:11.910 "data_offset": 2048, 00:19:11.910 "data_size": 63488 00:19:11.910 }, 00:19:11.910 { 00:19:11.910 "name": "BaseBdev2", 00:19:11.910 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:11.910 "is_configured": true, 00:19:11.910 "data_offset": 2048, 00:19:11.910 "data_size": 63488 00:19:11.910 }, 00:19:11.910 { 00:19:11.910 "name": "BaseBdev3", 
00:19:11.910 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:11.910 "is_configured": true, 00:19:11.910 "data_offset": 2048, 00:19:11.910 "data_size": 63488 00:19:11.910 } 00:19:11.910 ] 00:19:11.910 }' 00:19:11.910 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.910 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.910 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.910 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.910 08:51:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.948 "name": "raid_bdev1", 00:19:12.948 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:12.948 "strip_size_kb": 64, 00:19:12.948 "state": "online", 00:19:12.948 "raid_level": "raid5f", 00:19:12.948 "superblock": true, 00:19:12.948 "num_base_bdevs": 3, 00:19:12.948 "num_base_bdevs_discovered": 3, 00:19:12.948 "num_base_bdevs_operational": 3, 00:19:12.948 "process": { 00:19:12.948 "type": "rebuild", 00:19:12.948 "target": "spare", 00:19:12.948 "progress": { 00:19:12.948 "blocks": 92160, 00:19:12.948 "percent": 72 00:19:12.948 } 00:19:12.948 }, 00:19:12.948 "base_bdevs_list": [ 00:19:12.948 { 00:19:12.948 "name": "spare", 00:19:12.948 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:12.948 "is_configured": true, 00:19:12.948 "data_offset": 2048, 00:19:12.948 "data_size": 63488 00:19:12.948 }, 00:19:12.948 { 00:19:12.948 "name": "BaseBdev2", 00:19:12.948 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:12.948 "is_configured": true, 00:19:12.948 "data_offset": 2048, 00:19:12.948 "data_size": 63488 00:19:12.948 }, 00:19:12.948 { 00:19:12.948 "name": "BaseBdev3", 00:19:12.948 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:12.948 "is_configured": true, 00:19:12.948 "data_offset": 2048, 00:19:12.948 "data_size": 63488 00:19:12.948 } 00:19:12.948 ] 00:19:12.948 }' 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.948 08:51:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.322 08:51:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.322 "name": "raid_bdev1", 00:19:14.322 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:14.322 "strip_size_kb": 64, 00:19:14.322 "state": "online", 00:19:14.322 "raid_level": "raid5f", 00:19:14.322 "superblock": true, 00:19:14.322 "num_base_bdevs": 3, 00:19:14.322 "num_base_bdevs_discovered": 3, 00:19:14.322 "num_base_bdevs_operational": 3, 00:19:14.322 "process": { 00:19:14.322 "type": "rebuild", 00:19:14.322 "target": "spare", 00:19:14.322 "progress": { 00:19:14.322 "blocks": 116736, 00:19:14.322 "percent": 91 00:19:14.322 } 00:19:14.322 }, 00:19:14.322 "base_bdevs_list": [ 00:19:14.322 { 00:19:14.322 "name": "spare", 00:19:14.322 "uuid": 
"80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:14.322 "is_configured": true, 00:19:14.322 "data_offset": 2048, 00:19:14.322 "data_size": 63488 00:19:14.322 }, 00:19:14.322 { 00:19:14.322 "name": "BaseBdev2", 00:19:14.322 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:14.322 "is_configured": true, 00:19:14.322 "data_offset": 2048, 00:19:14.322 "data_size": 63488 00:19:14.322 }, 00:19:14.322 { 00:19:14.322 "name": "BaseBdev3", 00:19:14.322 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:14.322 "is_configured": true, 00:19:14.322 "data_offset": 2048, 00:19:14.322 "data_size": 63488 00:19:14.322 } 00:19:14.322 ] 00:19:14.322 }' 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.322 08:51:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.580 [2024-11-27 08:51:11.183579] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:14.580 [2024-11-27 08:51:11.183729] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:14.580 [2024-11-27 08:51:11.183902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.146 "name": "raid_bdev1", 00:19:15.146 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:15.146 "strip_size_kb": 64, 00:19:15.146 "state": "online", 00:19:15.146 "raid_level": "raid5f", 00:19:15.146 "superblock": true, 00:19:15.146 "num_base_bdevs": 3, 00:19:15.146 "num_base_bdevs_discovered": 3, 00:19:15.146 "num_base_bdevs_operational": 3, 00:19:15.146 "base_bdevs_list": [ 00:19:15.146 { 00:19:15.146 "name": "spare", 00:19:15.146 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:15.146 "is_configured": true, 00:19:15.146 "data_offset": 2048, 00:19:15.146 "data_size": 63488 00:19:15.146 }, 00:19:15.146 { 00:19:15.146 "name": "BaseBdev2", 00:19:15.146 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:15.146 "is_configured": true, 00:19:15.146 "data_offset": 2048, 00:19:15.146 "data_size": 63488 00:19:15.146 }, 00:19:15.146 { 00:19:15.146 "name": "BaseBdev3", 00:19:15.146 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:15.146 "is_configured": true, 00:19:15.146 "data_offset": 2048, 00:19:15.146 "data_size": 63488 00:19:15.146 } 
00:19:15.146 ] 00:19:15.146 }' 00:19:15.146 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.403 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.404 08:51:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.404 "name": "raid_bdev1", 00:19:15.404 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:15.404 "strip_size_kb": 64, 00:19:15.404 "state": "online", 00:19:15.404 "raid_level": 
"raid5f", 00:19:15.404 "superblock": true, 00:19:15.404 "num_base_bdevs": 3, 00:19:15.404 "num_base_bdevs_discovered": 3, 00:19:15.404 "num_base_bdevs_operational": 3, 00:19:15.404 "base_bdevs_list": [ 00:19:15.404 { 00:19:15.404 "name": "spare", 00:19:15.404 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:15.404 "is_configured": true, 00:19:15.404 "data_offset": 2048, 00:19:15.404 "data_size": 63488 00:19:15.404 }, 00:19:15.404 { 00:19:15.404 "name": "BaseBdev2", 00:19:15.404 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:15.404 "is_configured": true, 00:19:15.404 "data_offset": 2048, 00:19:15.404 "data_size": 63488 00:19:15.404 }, 00:19:15.404 { 00:19:15.404 "name": "BaseBdev3", 00:19:15.404 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:15.404 "is_configured": true, 00:19:15.404 "data_offset": 2048, 00:19:15.404 "data_size": 63488 00:19:15.404 } 00:19:15.404 ] 00:19:15.404 }' 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.404 08:51:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.404 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.662 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.662 "name": "raid_bdev1", 00:19:15.662 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:15.662 "strip_size_kb": 64, 00:19:15.662 "state": "online", 00:19:15.662 "raid_level": "raid5f", 00:19:15.662 "superblock": true, 00:19:15.662 "num_base_bdevs": 3, 00:19:15.662 "num_base_bdevs_discovered": 3, 00:19:15.662 "num_base_bdevs_operational": 3, 00:19:15.662 "base_bdevs_list": [ 00:19:15.662 { 00:19:15.662 "name": "spare", 00:19:15.662 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:15.662 "is_configured": true, 00:19:15.662 "data_offset": 2048, 00:19:15.662 "data_size": 63488 00:19:15.662 }, 00:19:15.662 { 00:19:15.662 "name": "BaseBdev2", 00:19:15.662 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:15.662 "is_configured": true, 00:19:15.662 "data_offset": 2048, 00:19:15.662 
"data_size": 63488 00:19:15.662 }, 00:19:15.662 { 00:19:15.662 "name": "BaseBdev3", 00:19:15.662 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:15.662 "is_configured": true, 00:19:15.662 "data_offset": 2048, 00:19:15.662 "data_size": 63488 00:19:15.662 } 00:19:15.662 ] 00:19:15.662 }' 00:19:15.662 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.662 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.920 [2024-11-27 08:51:12.656807] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:15.920 [2024-11-27 08:51:12.656855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.920 [2024-11-27 08:51:12.656988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.920 [2024-11-27 08:51:12.657125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.920 [2024-11-27 08:51:12.657154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.920 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.178 08:51:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:16.443 /dev/nbd0 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 
00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.443 1+0 records in 00:19:16.443 1+0 records out 00:19:16.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030315 s, 13.5 MB/s 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.443 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:16.734 /dev/nbd1 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.734 1+0 records in 00:19:16.734 1+0 records out 00:19:16.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035516 s, 11.5 MB/s 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
'[' 4096 '!=' 0 ']' 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.734 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:16.992 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:16.992 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.992 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.992 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:16.992 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:16.992 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.992 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.250 08:51:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.509 [2024-11-27 08:51:14.201035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:17.509 [2024-11-27 08:51:14.201116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.509 [2024-11-27 08:51:14.201152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:17.509 [2024-11-27 08:51:14.201172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.509 [2024-11-27 08:51:14.204370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.509 [2024-11-27 08:51:14.204418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:17.509 [2024-11-27 08:51:14.204536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:17.509 [2024-11-27 08:51:14.204621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.509 [2024-11-27 08:51:14.204826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.509 [2024-11-27 08:51:14.204999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.509 spare 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.509 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.767 [2024-11-27 08:51:14.305174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:17.767 [2024-11-27 08:51:14.305220] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:17.767 [2024-11-27 08:51:14.305619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:19:17.767 [2024-11-27 08:51:14.310652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:17.767 [2024-11-27 08:51:14.310690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:17.767 [2024-11-27 08:51:14.310931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.767 08:51:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.767 "name": "raid_bdev1", 00:19:17.767 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:17.767 "strip_size_kb": 64, 00:19:17.767 "state": "online", 00:19:17.767 "raid_level": "raid5f", 00:19:17.767 "superblock": true, 00:19:17.767 "num_base_bdevs": 3, 00:19:17.767 "num_base_bdevs_discovered": 3, 00:19:17.767 "num_base_bdevs_operational": 3, 00:19:17.767 "base_bdevs_list": [ 00:19:17.767 { 00:19:17.767 "name": "spare", 00:19:17.767 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:17.767 "is_configured": true, 00:19:17.767 "data_offset": 2048, 00:19:17.767 "data_size": 63488 00:19:17.767 }, 00:19:17.767 { 00:19:17.767 "name": "BaseBdev2", 00:19:17.767 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:17.767 "is_configured": true, 00:19:17.767 "data_offset": 2048, 00:19:17.767 "data_size": 63488 00:19:17.767 }, 00:19:17.767 { 00:19:17.767 "name": "BaseBdev3", 00:19:17.767 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:17.767 "is_configured": true, 00:19:17.767 "data_offset": 2048, 00:19:17.767 "data_size": 63488 00:19:17.767 } 00:19:17.767 ] 00:19:17.767 }' 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.767 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.333 08:51:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.333 "name": "raid_bdev1", 00:19:18.333 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:18.333 "strip_size_kb": 64, 00:19:18.333 "state": "online", 00:19:18.333 "raid_level": "raid5f", 00:19:18.333 "superblock": true, 00:19:18.333 "num_base_bdevs": 3, 00:19:18.333 "num_base_bdevs_discovered": 3, 00:19:18.333 "num_base_bdevs_operational": 3, 00:19:18.333 "base_bdevs_list": [ 00:19:18.333 { 00:19:18.333 "name": "spare", 00:19:18.333 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:18.333 "is_configured": true, 00:19:18.333 "data_offset": 2048, 00:19:18.333 "data_size": 63488 00:19:18.333 }, 00:19:18.333 { 00:19:18.333 "name": "BaseBdev2", 00:19:18.333 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:18.333 "is_configured": true, 00:19:18.333 "data_offset": 2048, 00:19:18.333 "data_size": 63488 00:19:18.333 }, 00:19:18.333 { 00:19:18.333 "name": "BaseBdev3", 00:19:18.333 "uuid": 
"c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:18.333 "is_configured": true, 00:19:18.333 "data_offset": 2048, 00:19:18.333 "data_size": 63488 00:19:18.333 } 00:19:18.333 ] 00:19:18.333 }' 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.333 [2024-11-27 08:51:14.997101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.333 08:51:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:18.333 
08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.333 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.333 "name": "raid_bdev1", 00:19:18.333 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:18.333 "strip_size_kb": 64, 00:19:18.333 "state": "online", 00:19:18.333 "raid_level": "raid5f", 00:19:18.333 "superblock": true, 00:19:18.333 "num_base_bdevs": 3, 00:19:18.333 "num_base_bdevs_discovered": 2, 00:19:18.333 "num_base_bdevs_operational": 2, 
00:19:18.333 "base_bdevs_list": [ 00:19:18.333 { 00:19:18.333 "name": null, 00:19:18.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.333 "is_configured": false, 00:19:18.333 "data_offset": 0, 00:19:18.333 "data_size": 63488 00:19:18.333 }, 00:19:18.333 { 00:19:18.333 "name": "BaseBdev2", 00:19:18.333 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:18.333 "is_configured": true, 00:19:18.333 "data_offset": 2048, 00:19:18.333 "data_size": 63488 00:19:18.333 }, 00:19:18.333 { 00:19:18.334 "name": "BaseBdev3", 00:19:18.334 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:18.334 "is_configured": true, 00:19:18.334 "data_offset": 2048, 00:19:18.334 "data_size": 63488 00:19:18.334 } 00:19:18.334 ] 00:19:18.334 }' 00:19:18.334 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.334 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.900 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.900 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.900 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.900 [2024-11-27 08:51:15.529299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.900 [2024-11-27 08:51:15.529588] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.901 [2024-11-27 08:51:15.529618] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:18.901 [2024-11-27 08:51:15.529673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.901 [2024-11-27 08:51:15.544699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:19:18.901 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.901 08:51:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:18.901 [2024-11-27 08:51:15.552000] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.834 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.092 "name": "raid_bdev1", 00:19:20.092 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:20.092 "strip_size_kb": 64, 00:19:20.092 "state": "online", 00:19:20.092 
"raid_level": "raid5f", 00:19:20.092 "superblock": true, 00:19:20.092 "num_base_bdevs": 3, 00:19:20.092 "num_base_bdevs_discovered": 3, 00:19:20.092 "num_base_bdevs_operational": 3, 00:19:20.092 "process": { 00:19:20.092 "type": "rebuild", 00:19:20.092 "target": "spare", 00:19:20.092 "progress": { 00:19:20.092 "blocks": 18432, 00:19:20.092 "percent": 14 00:19:20.092 } 00:19:20.092 }, 00:19:20.092 "base_bdevs_list": [ 00:19:20.092 { 00:19:20.092 "name": "spare", 00:19:20.092 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:20.092 "is_configured": true, 00:19:20.092 "data_offset": 2048, 00:19:20.092 "data_size": 63488 00:19:20.092 }, 00:19:20.092 { 00:19:20.092 "name": "BaseBdev2", 00:19:20.092 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:20.092 "is_configured": true, 00:19:20.092 "data_offset": 2048, 00:19:20.092 "data_size": 63488 00:19:20.092 }, 00:19:20.092 { 00:19:20.092 "name": "BaseBdev3", 00:19:20.092 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:20.092 "is_configured": true, 00:19:20.092 "data_offset": 2048, 00:19:20.092 "data_size": 63488 00:19:20.092 } 00:19:20.092 ] 00:19:20.092 }' 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.092 [2024-11-27 08:51:16.713715] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.092 [2024-11-27 08:51:16.765968] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.092 [2024-11-27 08:51:16.766056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.092 [2024-11-27 08:51:16.766083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.092 [2024-11-27 08:51:16.766099] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.092 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.350 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.350 "name": "raid_bdev1", 00:19:20.350 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:20.350 "strip_size_kb": 64, 00:19:20.350 "state": "online", 00:19:20.350 "raid_level": "raid5f", 00:19:20.350 "superblock": true, 00:19:20.350 "num_base_bdevs": 3, 00:19:20.350 "num_base_bdevs_discovered": 2, 00:19:20.350 "num_base_bdevs_operational": 2, 00:19:20.350 "base_bdevs_list": [ 00:19:20.350 { 00:19:20.350 "name": null, 00:19:20.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.350 "is_configured": false, 00:19:20.350 "data_offset": 0, 00:19:20.350 "data_size": 63488 00:19:20.350 }, 00:19:20.350 { 00:19:20.350 "name": "BaseBdev2", 00:19:20.350 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:20.350 "is_configured": true, 00:19:20.350 "data_offset": 2048, 00:19:20.350 "data_size": 63488 00:19:20.350 }, 00:19:20.350 { 00:19:20.350 "name": "BaseBdev3", 00:19:20.350 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:20.350 "is_configured": true, 00:19:20.350 "data_offset": 2048, 00:19:20.350 "data_size": 63488 00:19:20.350 } 00:19:20.350 ] 00:19:20.350 }' 00:19:20.350 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.350 08:51:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.608 08:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.608 08:51:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.608 08:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.608 [2024-11-27 08:51:17.334585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:20.608 [2024-11-27 08:51:17.334697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.608 [2024-11-27 08:51:17.334736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:20.608 [2024-11-27 08:51:17.334761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.608 [2024-11-27 08:51:17.335479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.608 [2024-11-27 08:51:17.335528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.608 [2024-11-27 08:51:17.335677] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:20.608 [2024-11-27 08:51:17.335706] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:20.608 [2024-11-27 08:51:17.335722] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:20.608 [2024-11-27 08:51:17.335760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.608 [2024-11-27 08:51:17.351166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:19:20.608 spare 00:19:20.608 08:51:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.608 08:51:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:20.608 [2024-11-27 08:51:17.358754] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.988 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.988 "name": "raid_bdev1", 00:19:21.988 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:21.988 "strip_size_kb": 64, 00:19:21.988 "state": 
"online", 00:19:21.988 "raid_level": "raid5f", 00:19:21.988 "superblock": true, 00:19:21.988 "num_base_bdevs": 3, 00:19:21.988 "num_base_bdevs_discovered": 3, 00:19:21.988 "num_base_bdevs_operational": 3, 00:19:21.988 "process": { 00:19:21.988 "type": "rebuild", 00:19:21.988 "target": "spare", 00:19:21.988 "progress": { 00:19:21.988 "blocks": 18432, 00:19:21.988 "percent": 14 00:19:21.988 } 00:19:21.988 }, 00:19:21.988 "base_bdevs_list": [ 00:19:21.988 { 00:19:21.988 "name": "spare", 00:19:21.988 "uuid": "80a5cdc3-df0d-5933-8914-19d92c8002e6", 00:19:21.988 "is_configured": true, 00:19:21.988 "data_offset": 2048, 00:19:21.988 "data_size": 63488 00:19:21.988 }, 00:19:21.988 { 00:19:21.988 "name": "BaseBdev2", 00:19:21.988 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:21.988 "is_configured": true, 00:19:21.988 "data_offset": 2048, 00:19:21.988 "data_size": 63488 00:19:21.988 }, 00:19:21.988 { 00:19:21.988 "name": "BaseBdev3", 00:19:21.988 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:21.988 "is_configured": true, 00:19:21.989 "data_offset": 2048, 00:19:21.989 "data_size": 63488 00:19:21.989 } 00:19:21.989 ] 00:19:21.989 }' 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.989 [2024-11-27 08:51:18.520480] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.989 [2024-11-27 08:51:18.575196] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:21.989 [2024-11-27 08:51:18.575291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.989 [2024-11-27 08:51:18.575322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.989 [2024-11-27 08:51:18.575350] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.989 "name": "raid_bdev1", 00:19:21.989 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:21.989 "strip_size_kb": 64, 00:19:21.989 "state": "online", 00:19:21.989 "raid_level": "raid5f", 00:19:21.989 "superblock": true, 00:19:21.989 "num_base_bdevs": 3, 00:19:21.989 "num_base_bdevs_discovered": 2, 00:19:21.989 "num_base_bdevs_operational": 2, 00:19:21.989 "base_bdevs_list": [ 00:19:21.989 { 00:19:21.989 "name": null, 00:19:21.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.989 "is_configured": false, 00:19:21.989 "data_offset": 0, 00:19:21.989 "data_size": 63488 00:19:21.989 }, 00:19:21.989 { 00:19:21.989 "name": "BaseBdev2", 00:19:21.989 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:21.989 "is_configured": true, 00:19:21.989 "data_offset": 2048, 00:19:21.989 "data_size": 63488 00:19:21.989 }, 00:19:21.989 { 00:19:21.989 "name": "BaseBdev3", 00:19:21.989 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:21.989 "is_configured": true, 00:19:21.989 "data_offset": 2048, 00:19:21.989 "data_size": 63488 00:19:21.989 } 00:19:21.989 ] 00:19:21.989 }' 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.989 08:51:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.555 "name": "raid_bdev1", 00:19:22.555 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:22.555 "strip_size_kb": 64, 00:19:22.555 "state": "online", 00:19:22.555 "raid_level": "raid5f", 00:19:22.555 "superblock": true, 00:19:22.555 "num_base_bdevs": 3, 00:19:22.555 "num_base_bdevs_discovered": 2, 00:19:22.555 "num_base_bdevs_operational": 2, 00:19:22.555 "base_bdevs_list": [ 00:19:22.555 { 00:19:22.555 "name": null, 00:19:22.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.555 "is_configured": false, 00:19:22.555 "data_offset": 0, 00:19:22.555 "data_size": 63488 00:19:22.555 }, 00:19:22.555 { 00:19:22.555 "name": "BaseBdev2", 00:19:22.555 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:22.555 "is_configured": true, 00:19:22.555 "data_offset": 2048, 00:19:22.555 "data_size": 63488 00:19:22.555 }, 00:19:22.555 { 00:19:22.555 "name": "BaseBdev3", 00:19:22.555 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:22.555 "is_configured": true, 
00:19:22.555 "data_offset": 2048, 00:19:22.555 "data_size": 63488 00:19:22.555 } 00:19:22.555 ] 00:19:22.555 }' 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.555 [2024-11-27 08:51:19.280428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:22.555 [2024-11-27 08:51:19.280503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.555 [2024-11-27 08:51:19.280546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:22.555 [2024-11-27 08:51:19.280563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.555 [2024-11-27 08:51:19.281215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.555 [2024-11-27 
08:51:19.281258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.555 [2024-11-27 08:51:19.281399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:22.555 [2024-11-27 08:51:19.281428] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:22.555 [2024-11-27 08:51:19.281457] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:22.555 [2024-11-27 08:51:19.281472] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:22.555 BaseBdev1 00:19:22.555 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.556 08:51:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.928 08:51:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.928 "name": "raid_bdev1", 00:19:23.928 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:23.928 "strip_size_kb": 64, 00:19:23.928 "state": "online", 00:19:23.928 "raid_level": "raid5f", 00:19:23.928 "superblock": true, 00:19:23.928 "num_base_bdevs": 3, 00:19:23.928 "num_base_bdevs_discovered": 2, 00:19:23.928 "num_base_bdevs_operational": 2, 00:19:23.928 "base_bdevs_list": [ 00:19:23.928 { 00:19:23.928 "name": null, 00:19:23.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.928 "is_configured": false, 00:19:23.928 "data_offset": 0, 00:19:23.928 "data_size": 63488 00:19:23.928 }, 00:19:23.928 { 00:19:23.928 "name": "BaseBdev2", 00:19:23.928 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:23.928 "is_configured": true, 00:19:23.928 "data_offset": 2048, 00:19:23.928 "data_size": 63488 00:19:23.928 }, 00:19:23.928 { 00:19:23.928 "name": "BaseBdev3", 00:19:23.928 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:23.928 "is_configured": true, 00:19:23.928 "data_offset": 2048, 00:19:23.928 "data_size": 63488 00:19:23.928 } 00:19:23.928 ] 00:19:23.928 }' 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.928 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.186 "name": "raid_bdev1", 00:19:24.186 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:24.186 "strip_size_kb": 64, 00:19:24.186 "state": "online", 00:19:24.186 "raid_level": "raid5f", 00:19:24.186 "superblock": true, 00:19:24.186 "num_base_bdevs": 3, 00:19:24.186 "num_base_bdevs_discovered": 2, 00:19:24.186 "num_base_bdevs_operational": 2, 00:19:24.186 "base_bdevs_list": [ 00:19:24.186 { 00:19:24.186 "name": null, 00:19:24.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.186 "is_configured": false, 00:19:24.186 "data_offset": 0, 00:19:24.186 "data_size": 63488 00:19:24.186 }, 00:19:24.186 { 00:19:24.186 "name": "BaseBdev2", 00:19:24.186 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 
00:19:24.186 "is_configured": true, 00:19:24.186 "data_offset": 2048, 00:19:24.186 "data_size": 63488 00:19:24.186 }, 00:19:24.186 { 00:19:24.186 "name": "BaseBdev3", 00:19:24.186 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:24.186 "is_configured": true, 00:19:24.186 "data_offset": 2048, 00:19:24.186 "data_size": 63488 00:19:24.186 } 00:19:24.186 ] 00:19:24.186 }' 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.186 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.186 08:51:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.443 [2024-11-27 08:51:20.945141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.443 [2024-11-27 08:51:20.945421] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.443 [2024-11-27 08:51:20.945449] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:24.443 request: 00:19:24.443 { 00:19:24.443 "base_bdev": "BaseBdev1", 00:19:24.443 "raid_bdev": "raid_bdev1", 00:19:24.443 "method": "bdev_raid_add_base_bdev", 00:19:24.443 "req_id": 1 00:19:24.443 } 00:19:24.443 Got JSON-RPC error response 00:19:24.443 response: 00:19:24.443 { 00:19:24.443 "code": -22, 00:19:24.443 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:24.443 } 00:19:24.443 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:24.443 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:24.443 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.443 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.443 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.443 08:51:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.378 08:51:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.378 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.378 "name": "raid_bdev1", 00:19:25.378 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:25.378 "strip_size_kb": 64, 00:19:25.378 "state": "online", 00:19:25.378 "raid_level": "raid5f", 00:19:25.378 "superblock": true, 00:19:25.378 "num_base_bdevs": 3, 00:19:25.378 "num_base_bdevs_discovered": 2, 00:19:25.378 "num_base_bdevs_operational": 2, 00:19:25.378 "base_bdevs_list": [ 00:19:25.378 { 00:19:25.378 "name": null, 00:19:25.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.378 "is_configured": false, 00:19:25.378 "data_offset": 0, 00:19:25.378 "data_size": 63488 00:19:25.378 }, 00:19:25.378 { 00:19:25.378 
"name": "BaseBdev2", 00:19:25.378 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:25.378 "is_configured": true, 00:19:25.378 "data_offset": 2048, 00:19:25.378 "data_size": 63488 00:19:25.378 }, 00:19:25.378 { 00:19:25.378 "name": "BaseBdev3", 00:19:25.378 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:25.378 "is_configured": true, 00:19:25.378 "data_offset": 2048, 00:19:25.378 "data_size": 63488 00:19:25.378 } 00:19:25.378 ] 00:19:25.378 }' 00:19:25.378 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.378 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.945 "name": "raid_bdev1", 00:19:25.945 "uuid": "29ab17ac-a5c1-4e7f-88e4-7ca3c486184a", 00:19:25.945 
"strip_size_kb": 64, 00:19:25.945 "state": "online", 00:19:25.945 "raid_level": "raid5f", 00:19:25.945 "superblock": true, 00:19:25.945 "num_base_bdevs": 3, 00:19:25.945 "num_base_bdevs_discovered": 2, 00:19:25.945 "num_base_bdevs_operational": 2, 00:19:25.945 "base_bdevs_list": [ 00:19:25.945 { 00:19:25.945 "name": null, 00:19:25.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.945 "is_configured": false, 00:19:25.945 "data_offset": 0, 00:19:25.945 "data_size": 63488 00:19:25.945 }, 00:19:25.945 { 00:19:25.945 "name": "BaseBdev2", 00:19:25.945 "uuid": "9dc8429d-09ce-5051-8991-24f438effcbc", 00:19:25.945 "is_configured": true, 00:19:25.945 "data_offset": 2048, 00:19:25.945 "data_size": 63488 00:19:25.945 }, 00:19:25.945 { 00:19:25.945 "name": "BaseBdev3", 00:19:25.945 "uuid": "c592b925-ea5a-59e5-96bb-9dbc7475de44", 00:19:25.945 "is_configured": true, 00:19:25.945 "data_offset": 2048, 00:19:25.945 "data_size": 63488 00:19:25.945 } 00:19:25.945 ] 00:19:25.945 }' 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82529 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' -z 82529 ']' 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # kill -0 82529 00:19:25.945 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # uname 00:19:25.946 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:25.946 08:51:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 82529 00:19:25.946 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:19:25.946 killing process with pid 82529 00:19:25.946 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:19:25.946 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 82529' 00:19:25.946 Received shutdown signal, test time was about 60.000000 seconds 00:19:25.946 00:19:25.946 Latency(us) 00:19:25.946 [2024-11-27T08:51:22.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.946 [2024-11-27T08:51:22.706Z] =================================================================================================================== 00:19:25.946 [2024-11-27T08:51:22.706Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.946 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # kill 82529 00:19:25.946 08:51:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@975 -- # wait 82529 00:19:25.946 [2024-11-27 08:51:22.682063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.946 [2024-11-27 08:51:22.682243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.946 [2024-11-27 08:51:22.682387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.946 [2024-11-27 08:51:22.682432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:26.561 [2024-11-27 08:51:23.071868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.499 08:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:27.499 00:19:27.499 real 0m24.888s 00:19:27.499 user 0m32.972s 
00:19:27.499 sys 0m2.678s 00:19:27.499 08:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:19:27.499 08:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.499 ************************************ 00:19:27.499 END TEST raid5f_rebuild_test_sb 00:19:27.499 ************************************ 00:19:27.499 08:51:24 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:27.499 08:51:24 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:19:27.499 08:51:24 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:19:27.499 08:51:24 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:19:27.499 08:51:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.499 ************************************ 00:19:27.499 START TEST raid5f_state_function_test 00:19:27.499 ************************************ 00:19:27.499 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # raid_state_function_test raid5f 4 false 00:19:27.499 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:27.499 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:27.499 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:27.499 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:27.499 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:27.499 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:27.500 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83294 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:27.759 Process raid pid: 83294 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83294' 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83294 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # '[' -z 83294 ']' 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:27.759 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.759 [2024-11-27 08:51:24.375362] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:19:27.759 [2024-11-27 08:51:24.375551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.019 [2024-11-27 08:51:24.561307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.019 [2024-11-27 08:51:24.736059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.278 [2024-11-27 08:51:24.971446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.278 [2024-11-27 08:51:24.971509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@865 -- # return 0 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.845 [2024-11-27 08:51:25.383976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.845 [2024-11-27 08:51:25.384048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.845 [2024-11-27 08:51:25.384067] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.845 [2024-11-27 08:51:25.384084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.845 [2024-11-27 08:51:25.384094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:19:28.845 [2024-11-27 08:51:25.384109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.845 [2024-11-27 08:51:25.384119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:28.845 [2024-11-27 08:51:25.384133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.845 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.846 08:51:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.846 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.846 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.846 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.846 "name": "Existed_Raid", 00:19:28.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.846 "strip_size_kb": 64, 00:19:28.846 "state": "configuring", 00:19:28.846 "raid_level": "raid5f", 00:19:28.846 "superblock": false, 00:19:28.846 "num_base_bdevs": 4, 00:19:28.846 "num_base_bdevs_discovered": 0, 00:19:28.846 "num_base_bdevs_operational": 4, 00:19:28.846 "base_bdevs_list": [ 00:19:28.846 { 00:19:28.846 "name": "BaseBdev1", 00:19:28.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.846 "is_configured": false, 00:19:28.846 "data_offset": 0, 00:19:28.846 "data_size": 0 00:19:28.846 }, 00:19:28.846 { 00:19:28.846 "name": "BaseBdev2", 00:19:28.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.846 "is_configured": false, 00:19:28.846 "data_offset": 0, 00:19:28.846 "data_size": 0 00:19:28.846 }, 00:19:28.846 { 00:19:28.846 "name": "BaseBdev3", 00:19:28.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.846 "is_configured": false, 00:19:28.846 "data_offset": 0, 00:19:28.846 "data_size": 0 00:19:28.846 }, 00:19:28.846 { 00:19:28.846 "name": "BaseBdev4", 00:19:28.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.846 "is_configured": false, 00:19:28.846 "data_offset": 0, 00:19:28.846 "data_size": 0 00:19:28.846 } 00:19:28.846 ] 00:19:28.846 }' 00:19:28.846 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.846 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.412 [2024-11-27 08:51:25.868070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.412 [2024-11-27 08:51:25.868124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.412 [2024-11-27 08:51:25.876010] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:29.412 [2024-11-27 08:51:25.876067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:29.412 [2024-11-27 08:51:25.876084] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.412 [2024-11-27 08:51:25.876100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.412 [2024-11-27 08:51:25.876110] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.412 [2024-11-27 08:51:25.876125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.412 [2024-11-27 08:51:25.876135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:19:29.412 [2024-11-27 08:51:25.876149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.412 [2024-11-27 08:51:25.925415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.412 BaseBdev1 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.412 
08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.412 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.412 [ 00:19:29.412 { 00:19:29.412 "name": "BaseBdev1", 00:19:29.412 "aliases": [ 00:19:29.412 "11525304-ec81-4399-8b18-4dba3fe1dfb4" 00:19:29.412 ], 00:19:29.412 "product_name": "Malloc disk", 00:19:29.412 "block_size": 512, 00:19:29.412 "num_blocks": 65536, 00:19:29.412 "uuid": "11525304-ec81-4399-8b18-4dba3fe1dfb4", 00:19:29.412 "assigned_rate_limits": { 00:19:29.412 "rw_ios_per_sec": 0, 00:19:29.412 "rw_mbytes_per_sec": 0, 00:19:29.412 "r_mbytes_per_sec": 0, 00:19:29.412 "w_mbytes_per_sec": 0 00:19:29.412 }, 00:19:29.412 "claimed": true, 00:19:29.412 "claim_type": "exclusive_write", 00:19:29.412 "zoned": false, 00:19:29.412 "supported_io_types": { 00:19:29.412 "read": true, 00:19:29.412 "write": true, 00:19:29.412 "unmap": true, 00:19:29.412 "flush": true, 00:19:29.412 "reset": true, 00:19:29.412 "nvme_admin": false, 00:19:29.412 "nvme_io": false, 00:19:29.412 "nvme_io_md": false, 00:19:29.412 "write_zeroes": true, 00:19:29.412 "zcopy": true, 00:19:29.412 "get_zone_info": false, 00:19:29.412 "zone_management": false, 00:19:29.412 "zone_append": false, 00:19:29.412 "compare": false, 00:19:29.412 "compare_and_write": false, 00:19:29.413 "abort": true, 00:19:29.413 "seek_hole": false, 00:19:29.413 "seek_data": false, 00:19:29.413 "copy": true, 00:19:29.413 "nvme_iov_md": false 00:19:29.413 }, 00:19:29.413 "memory_domains": [ 00:19:29.413 { 00:19:29.413 "dma_device_id": "system", 00:19:29.413 "dma_device_type": 1 00:19:29.413 }, 00:19:29.413 { 00:19:29.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.413 "dma_device_type": 2 00:19:29.413 } 00:19:29.413 ], 00:19:29.413 "driver_specific": {} 00:19:29.413 } 
00:19:29.413 ] 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.413 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:29.413 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.413 "name": "Existed_Raid", 00:19:29.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.413 "strip_size_kb": 64, 00:19:29.413 "state": "configuring", 00:19:29.413 "raid_level": "raid5f", 00:19:29.413 "superblock": false, 00:19:29.413 "num_base_bdevs": 4, 00:19:29.413 "num_base_bdevs_discovered": 1, 00:19:29.413 "num_base_bdevs_operational": 4, 00:19:29.413 "base_bdevs_list": [ 00:19:29.413 { 00:19:29.413 "name": "BaseBdev1", 00:19:29.413 "uuid": "11525304-ec81-4399-8b18-4dba3fe1dfb4", 00:19:29.413 "is_configured": true, 00:19:29.413 "data_offset": 0, 00:19:29.413 "data_size": 65536 00:19:29.413 }, 00:19:29.413 { 00:19:29.413 "name": "BaseBdev2", 00:19:29.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.413 "is_configured": false, 00:19:29.413 "data_offset": 0, 00:19:29.413 "data_size": 0 00:19:29.413 }, 00:19:29.413 { 00:19:29.413 "name": "BaseBdev3", 00:19:29.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.413 "is_configured": false, 00:19:29.413 "data_offset": 0, 00:19:29.413 "data_size": 0 00:19:29.413 }, 00:19:29.413 { 00:19:29.413 "name": "BaseBdev4", 00:19:29.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.413 "is_configured": false, 00:19:29.413 "data_offset": 0, 00:19:29.413 "data_size": 0 00:19:29.413 } 00:19:29.413 ] 00:19:29.413 }' 00:19:29.413 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.413 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.980 
[2024-11-27 08:51:26.465633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.980 [2024-11-27 08:51:26.465708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.980 [2024-11-27 08:51:26.473664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.980 [2024-11-27 08:51:26.476267] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.980 [2024-11-27 08:51:26.476327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.980 [2024-11-27 08:51:26.476359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:29.980 [2024-11-27 08:51:26.476379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.980 [2024-11-27 08:51:26.476389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:29.980 [2024-11-27 08:51:26.476404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.980 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.980 "name": "Existed_Raid", 00:19:29.980 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:29.980 "strip_size_kb": 64, 00:19:29.980 "state": "configuring", 00:19:29.980 "raid_level": "raid5f", 00:19:29.980 "superblock": false, 00:19:29.980 "num_base_bdevs": 4, 00:19:29.980 "num_base_bdevs_discovered": 1, 00:19:29.980 "num_base_bdevs_operational": 4, 00:19:29.980 "base_bdevs_list": [ 00:19:29.980 { 00:19:29.980 "name": "BaseBdev1", 00:19:29.980 "uuid": "11525304-ec81-4399-8b18-4dba3fe1dfb4", 00:19:29.980 "is_configured": true, 00:19:29.980 "data_offset": 0, 00:19:29.980 "data_size": 65536 00:19:29.980 }, 00:19:29.980 { 00:19:29.980 "name": "BaseBdev2", 00:19:29.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.981 "is_configured": false, 00:19:29.981 "data_offset": 0, 00:19:29.981 "data_size": 0 00:19:29.981 }, 00:19:29.981 { 00:19:29.981 "name": "BaseBdev3", 00:19:29.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.981 "is_configured": false, 00:19:29.981 "data_offset": 0, 00:19:29.981 "data_size": 0 00:19:29.981 }, 00:19:29.981 { 00:19:29.981 "name": "BaseBdev4", 00:19:29.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.981 "is_configured": false, 00:19:29.981 "data_offset": 0, 00:19:29.981 "data_size": 0 00:19:29.981 } 00:19:29.981 ] 00:19:29.981 }' 00:19:29.981 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.981 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.276 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:30.276 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.276 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.536 [2024-11-27 08:51:27.035563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.536 BaseBdev2 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.536 [ 00:19:30.536 { 00:19:30.536 "name": "BaseBdev2", 00:19:30.536 "aliases": [ 00:19:30.536 "e7e8c3b2-2a43-45c8-b108-eb7171a6153d" 00:19:30.536 ], 00:19:30.536 "product_name": "Malloc disk", 00:19:30.536 "block_size": 512, 00:19:30.536 "num_blocks": 65536, 00:19:30.536 "uuid": "e7e8c3b2-2a43-45c8-b108-eb7171a6153d", 00:19:30.536 "assigned_rate_limits": { 00:19:30.536 "rw_ios_per_sec": 0, 00:19:30.536 "rw_mbytes_per_sec": 0, 00:19:30.536 
"r_mbytes_per_sec": 0, 00:19:30.536 "w_mbytes_per_sec": 0 00:19:30.536 }, 00:19:30.536 "claimed": true, 00:19:30.536 "claim_type": "exclusive_write", 00:19:30.536 "zoned": false, 00:19:30.536 "supported_io_types": { 00:19:30.536 "read": true, 00:19:30.536 "write": true, 00:19:30.536 "unmap": true, 00:19:30.536 "flush": true, 00:19:30.536 "reset": true, 00:19:30.536 "nvme_admin": false, 00:19:30.536 "nvme_io": false, 00:19:30.536 "nvme_io_md": false, 00:19:30.536 "write_zeroes": true, 00:19:30.536 "zcopy": true, 00:19:30.536 "get_zone_info": false, 00:19:30.536 "zone_management": false, 00:19:30.536 "zone_append": false, 00:19:30.536 "compare": false, 00:19:30.536 "compare_and_write": false, 00:19:30.536 "abort": true, 00:19:30.536 "seek_hole": false, 00:19:30.536 "seek_data": false, 00:19:30.536 "copy": true, 00:19:30.536 "nvme_iov_md": false 00:19:30.536 }, 00:19:30.536 "memory_domains": [ 00:19:30.536 { 00:19:30.536 "dma_device_id": "system", 00:19:30.536 "dma_device_type": 1 00:19:30.536 }, 00:19:30.536 { 00:19:30.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.536 "dma_device_type": 2 00:19:30.536 } 00:19:30.536 ], 00:19:30.536 "driver_specific": {} 00:19:30.536 } 00:19:30.536 ] 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.536 "name": "Existed_Raid", 00:19:30.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.536 "strip_size_kb": 64, 00:19:30.536 "state": "configuring", 00:19:30.536 "raid_level": "raid5f", 00:19:30.536 "superblock": false, 00:19:30.536 "num_base_bdevs": 4, 00:19:30.536 "num_base_bdevs_discovered": 2, 00:19:30.536 "num_base_bdevs_operational": 4, 00:19:30.536 "base_bdevs_list": [ 00:19:30.536 { 00:19:30.536 "name": "BaseBdev1", 00:19:30.536 "uuid": 
"11525304-ec81-4399-8b18-4dba3fe1dfb4", 00:19:30.536 "is_configured": true, 00:19:30.536 "data_offset": 0, 00:19:30.536 "data_size": 65536 00:19:30.536 }, 00:19:30.536 { 00:19:30.536 "name": "BaseBdev2", 00:19:30.536 "uuid": "e7e8c3b2-2a43-45c8-b108-eb7171a6153d", 00:19:30.536 "is_configured": true, 00:19:30.536 "data_offset": 0, 00:19:30.536 "data_size": 65536 00:19:30.536 }, 00:19:30.536 { 00:19:30.536 "name": "BaseBdev3", 00:19:30.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.536 "is_configured": false, 00:19:30.536 "data_offset": 0, 00:19:30.536 "data_size": 0 00:19:30.536 }, 00:19:30.536 { 00:19:30.536 "name": "BaseBdev4", 00:19:30.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.536 "is_configured": false, 00:19:30.536 "data_offset": 0, 00:19:30.536 "data_size": 0 00:19:30.536 } 00:19:30.536 ] 00:19:30.536 }' 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.536 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.103 [2024-11-27 08:51:27.627569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.103 BaseBdev3 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- 
# local bdev_timeout= 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.103 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.103 [ 00:19:31.103 { 00:19:31.103 "name": "BaseBdev3", 00:19:31.103 "aliases": [ 00:19:31.103 "203d4d54-bb4b-4251-8458-03f6f6d6c0ea" 00:19:31.103 ], 00:19:31.103 "product_name": "Malloc disk", 00:19:31.103 "block_size": 512, 00:19:31.103 "num_blocks": 65536, 00:19:31.103 "uuid": "203d4d54-bb4b-4251-8458-03f6f6d6c0ea", 00:19:31.103 "assigned_rate_limits": { 00:19:31.103 "rw_ios_per_sec": 0, 00:19:31.103 "rw_mbytes_per_sec": 0, 00:19:31.103 "r_mbytes_per_sec": 0, 00:19:31.103 "w_mbytes_per_sec": 0 00:19:31.103 }, 00:19:31.103 "claimed": true, 00:19:31.103 "claim_type": "exclusive_write", 00:19:31.103 "zoned": false, 00:19:31.103 "supported_io_types": { 00:19:31.103 "read": true, 00:19:31.103 "write": true, 00:19:31.103 "unmap": true, 00:19:31.103 "flush": true, 00:19:31.103 "reset": true, 00:19:31.103 "nvme_admin": false, 
00:19:31.103 "nvme_io": false, 00:19:31.103 "nvme_io_md": false, 00:19:31.103 "write_zeroes": true, 00:19:31.103 "zcopy": true, 00:19:31.103 "get_zone_info": false, 00:19:31.103 "zone_management": false, 00:19:31.103 "zone_append": false, 00:19:31.103 "compare": false, 00:19:31.103 "compare_and_write": false, 00:19:31.103 "abort": true, 00:19:31.103 "seek_hole": false, 00:19:31.103 "seek_data": false, 00:19:31.103 "copy": true, 00:19:31.104 "nvme_iov_md": false 00:19:31.104 }, 00:19:31.104 "memory_domains": [ 00:19:31.104 { 00:19:31.104 "dma_device_id": "system", 00:19:31.104 "dma_device_type": 1 00:19:31.104 }, 00:19:31.104 { 00:19:31.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.104 "dma_device_type": 2 00:19:31.104 } 00:19:31.104 ], 00:19:31.104 "driver_specific": {} 00:19:31.104 } 00:19:31.104 ] 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.104 "name": "Existed_Raid", 00:19:31.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.104 "strip_size_kb": 64, 00:19:31.104 "state": "configuring", 00:19:31.104 "raid_level": "raid5f", 00:19:31.104 "superblock": false, 00:19:31.104 "num_base_bdevs": 4, 00:19:31.104 "num_base_bdevs_discovered": 3, 00:19:31.104 "num_base_bdevs_operational": 4, 00:19:31.104 "base_bdevs_list": [ 00:19:31.104 { 00:19:31.104 "name": "BaseBdev1", 00:19:31.104 "uuid": "11525304-ec81-4399-8b18-4dba3fe1dfb4", 00:19:31.104 "is_configured": true, 00:19:31.104 "data_offset": 0, 00:19:31.104 "data_size": 65536 00:19:31.104 }, 00:19:31.104 { 00:19:31.104 "name": "BaseBdev2", 00:19:31.104 "uuid": "e7e8c3b2-2a43-45c8-b108-eb7171a6153d", 00:19:31.104 "is_configured": true, 00:19:31.104 "data_offset": 0, 00:19:31.104 "data_size": 65536 00:19:31.104 }, 00:19:31.104 { 
00:19:31.104 "name": "BaseBdev3", 00:19:31.104 "uuid": "203d4d54-bb4b-4251-8458-03f6f6d6c0ea", 00:19:31.104 "is_configured": true, 00:19:31.104 "data_offset": 0, 00:19:31.104 "data_size": 65536 00:19:31.104 }, 00:19:31.104 { 00:19:31.104 "name": "BaseBdev4", 00:19:31.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.104 "is_configured": false, 00:19:31.104 "data_offset": 0, 00:19:31.104 "data_size": 0 00:19:31.104 } 00:19:31.104 ] 00:19:31.104 }' 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.104 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.672 [2024-11-27 08:51:28.199150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:31.672 [2024-11-27 08:51:28.199233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:31.672 [2024-11-27 08:51:28.199249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:31.672 [2024-11-27 08:51:28.199598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:31.672 [2024-11-27 08:51:28.208104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:31.672 [2024-11-27 08:51:28.208152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:31.672 BaseBdev4 00:19:31.672 [2024-11-27 08:51:28.208690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.672 08:51:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.672 [ 00:19:31.672 { 00:19:31.672 "name": "BaseBdev4", 00:19:31.672 "aliases": [ 00:19:31.672 "795b278e-6f50-493a-af79-0627a80029ea" 00:19:31.672 ], 00:19:31.672 "product_name": "Malloc disk", 00:19:31.672 "block_size": 512, 00:19:31.672 "num_blocks": 65536, 00:19:31.672 "uuid": "795b278e-6f50-493a-af79-0627a80029ea", 00:19:31.672 "assigned_rate_limits": { 00:19:31.672 "rw_ios_per_sec": 0, 00:19:31.672 
"rw_mbytes_per_sec": 0, 00:19:31.672 "r_mbytes_per_sec": 0, 00:19:31.672 "w_mbytes_per_sec": 0 00:19:31.672 }, 00:19:31.672 "claimed": true, 00:19:31.672 "claim_type": "exclusive_write", 00:19:31.672 "zoned": false, 00:19:31.672 "supported_io_types": { 00:19:31.672 "read": true, 00:19:31.672 "write": true, 00:19:31.672 "unmap": true, 00:19:31.672 "flush": true, 00:19:31.672 "reset": true, 00:19:31.672 "nvme_admin": false, 00:19:31.672 "nvme_io": false, 00:19:31.672 "nvme_io_md": false, 00:19:31.672 "write_zeroes": true, 00:19:31.672 "zcopy": true, 00:19:31.672 "get_zone_info": false, 00:19:31.672 "zone_management": false, 00:19:31.672 "zone_append": false, 00:19:31.672 "compare": false, 00:19:31.672 "compare_and_write": false, 00:19:31.672 "abort": true, 00:19:31.672 "seek_hole": false, 00:19:31.672 "seek_data": false, 00:19:31.672 "copy": true, 00:19:31.672 "nvme_iov_md": false 00:19:31.672 }, 00:19:31.672 "memory_domains": [ 00:19:31.672 { 00:19:31.672 "dma_device_id": "system", 00:19:31.672 "dma_device_type": 1 00:19:31.672 }, 00:19:31.672 { 00:19:31.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.672 "dma_device_type": 2 00:19:31.672 } 00:19:31.672 ], 00:19:31.672 "driver_specific": {} 00:19:31.672 } 00:19:31.672 ] 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.672 08:51:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.672 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.672 "name": "Existed_Raid", 00:19:31.672 "uuid": "4a22890d-8cc3-4983-8a80-4b0fef47104e", 00:19:31.672 "strip_size_kb": 64, 00:19:31.672 "state": "online", 00:19:31.672 "raid_level": "raid5f", 00:19:31.672 "superblock": false, 00:19:31.672 "num_base_bdevs": 4, 00:19:31.673 "num_base_bdevs_discovered": 4, 00:19:31.673 "num_base_bdevs_operational": 4, 00:19:31.673 "base_bdevs_list": [ 00:19:31.673 { 00:19:31.673 "name": 
"BaseBdev1", 00:19:31.673 "uuid": "11525304-ec81-4399-8b18-4dba3fe1dfb4", 00:19:31.673 "is_configured": true, 00:19:31.673 "data_offset": 0, 00:19:31.673 "data_size": 65536 00:19:31.673 }, 00:19:31.673 { 00:19:31.673 "name": "BaseBdev2", 00:19:31.673 "uuid": "e7e8c3b2-2a43-45c8-b108-eb7171a6153d", 00:19:31.673 "is_configured": true, 00:19:31.673 "data_offset": 0, 00:19:31.673 "data_size": 65536 00:19:31.673 }, 00:19:31.673 { 00:19:31.673 "name": "BaseBdev3", 00:19:31.673 "uuid": "203d4d54-bb4b-4251-8458-03f6f6d6c0ea", 00:19:31.673 "is_configured": true, 00:19:31.673 "data_offset": 0, 00:19:31.673 "data_size": 65536 00:19:31.673 }, 00:19:31.673 { 00:19:31.673 "name": "BaseBdev4", 00:19:31.673 "uuid": "795b278e-6f50-493a-af79-0627a80029ea", 00:19:31.673 "is_configured": true, 00:19:31.673 "data_offset": 0, 00:19:31.673 "data_size": 65536 00:19:31.673 } 00:19:31.673 ] 00:19:31.673 }' 00:19:31.673 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.673 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:32.240 [2024-11-27 08:51:28.776796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.240 "name": "Existed_Raid", 00:19:32.240 "aliases": [ 00:19:32.240 "4a22890d-8cc3-4983-8a80-4b0fef47104e" 00:19:32.240 ], 00:19:32.240 "product_name": "Raid Volume", 00:19:32.240 "block_size": 512, 00:19:32.240 "num_blocks": 196608, 00:19:32.240 "uuid": "4a22890d-8cc3-4983-8a80-4b0fef47104e", 00:19:32.240 "assigned_rate_limits": { 00:19:32.240 "rw_ios_per_sec": 0, 00:19:32.240 "rw_mbytes_per_sec": 0, 00:19:32.240 "r_mbytes_per_sec": 0, 00:19:32.240 "w_mbytes_per_sec": 0 00:19:32.240 }, 00:19:32.240 "claimed": false, 00:19:32.240 "zoned": false, 00:19:32.240 "supported_io_types": { 00:19:32.240 "read": true, 00:19:32.240 "write": true, 00:19:32.240 "unmap": false, 00:19:32.240 "flush": false, 00:19:32.240 "reset": true, 00:19:32.240 "nvme_admin": false, 00:19:32.240 "nvme_io": false, 00:19:32.240 "nvme_io_md": false, 00:19:32.240 "write_zeroes": true, 00:19:32.240 "zcopy": false, 00:19:32.240 "get_zone_info": false, 00:19:32.240 "zone_management": false, 00:19:32.240 "zone_append": false, 00:19:32.240 "compare": false, 00:19:32.240 "compare_and_write": false, 00:19:32.240 "abort": false, 00:19:32.240 "seek_hole": false, 00:19:32.240 "seek_data": false, 00:19:32.240 "copy": false, 00:19:32.240 "nvme_iov_md": false 00:19:32.240 }, 00:19:32.240 "driver_specific": { 00:19:32.240 "raid": { 00:19:32.240 "uuid": "4a22890d-8cc3-4983-8a80-4b0fef47104e", 00:19:32.240 "strip_size_kb": 64, 
00:19:32.240 "state": "online", 00:19:32.240 "raid_level": "raid5f", 00:19:32.240 "superblock": false, 00:19:32.240 "num_base_bdevs": 4, 00:19:32.240 "num_base_bdevs_discovered": 4, 00:19:32.240 "num_base_bdevs_operational": 4, 00:19:32.240 "base_bdevs_list": [ 00:19:32.240 { 00:19:32.240 "name": "BaseBdev1", 00:19:32.240 "uuid": "11525304-ec81-4399-8b18-4dba3fe1dfb4", 00:19:32.240 "is_configured": true, 00:19:32.240 "data_offset": 0, 00:19:32.240 "data_size": 65536 00:19:32.240 }, 00:19:32.240 { 00:19:32.240 "name": "BaseBdev2", 00:19:32.240 "uuid": "e7e8c3b2-2a43-45c8-b108-eb7171a6153d", 00:19:32.240 "is_configured": true, 00:19:32.240 "data_offset": 0, 00:19:32.240 "data_size": 65536 00:19:32.240 }, 00:19:32.240 { 00:19:32.240 "name": "BaseBdev3", 00:19:32.240 "uuid": "203d4d54-bb4b-4251-8458-03f6f6d6c0ea", 00:19:32.240 "is_configured": true, 00:19:32.240 "data_offset": 0, 00:19:32.240 "data_size": 65536 00:19:32.240 }, 00:19:32.240 { 00:19:32.240 "name": "BaseBdev4", 00:19:32.240 "uuid": "795b278e-6f50-493a-af79-0627a80029ea", 00:19:32.240 "is_configured": true, 00:19:32.240 "data_offset": 0, 00:19:32.240 "data_size": 65536 00:19:32.240 } 00:19:32.240 ] 00:19:32.240 } 00:19:32.240 } 00:19:32.240 }' 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:32.240 BaseBdev2 00:19:32.240 BaseBdev3 00:19:32.240 BaseBdev4' 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.240 08:51:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.240 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:19:32.499 [2024-11-27 08:51:29.140795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.499 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.499 08:51:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.500 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.500 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.500 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.757 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.757 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.757 "name": "Existed_Raid", 00:19:32.757 "uuid": "4a22890d-8cc3-4983-8a80-4b0fef47104e", 00:19:32.757 "strip_size_kb": 64, 00:19:32.757 "state": "online", 00:19:32.757 "raid_level": "raid5f", 00:19:32.757 "superblock": false, 00:19:32.757 "num_base_bdevs": 4, 00:19:32.757 "num_base_bdevs_discovered": 3, 00:19:32.757 "num_base_bdevs_operational": 3, 00:19:32.757 "base_bdevs_list": [ 00:19:32.757 { 00:19:32.757 "name": null, 00:19:32.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.757 "is_configured": false, 00:19:32.757 "data_offset": 0, 00:19:32.757 "data_size": 65536 00:19:32.757 }, 00:19:32.757 { 00:19:32.757 "name": "BaseBdev2", 00:19:32.757 "uuid": "e7e8c3b2-2a43-45c8-b108-eb7171a6153d", 00:19:32.757 "is_configured": true, 00:19:32.757 "data_offset": 0, 00:19:32.757 "data_size": 65536 00:19:32.757 }, 00:19:32.757 { 00:19:32.757 "name": "BaseBdev3", 00:19:32.757 "uuid": "203d4d54-bb4b-4251-8458-03f6f6d6c0ea", 00:19:32.757 "is_configured": true, 00:19:32.757 "data_offset": 0, 00:19:32.757 "data_size": 65536 00:19:32.757 }, 00:19:32.757 { 00:19:32.757 "name": "BaseBdev4", 00:19:32.757 "uuid": "795b278e-6f50-493a-af79-0627a80029ea", 00:19:32.757 "is_configured": true, 00:19:32.757 "data_offset": 0, 00:19:32.757 "data_size": 65536 00:19:32.757 } 00:19:32.757 ] 00:19:32.757 }' 00:19:32.758 
08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.758 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.015 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:33.015 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.016 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.016 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.016 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.016 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.016 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.274 [2024-11-27 08:51:29.796657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.274 [2024-11-27 08:51:29.796810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.274 [2024-11-27 08:51:29.890121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.274 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.274 [2024-11-27 08:51:29.954130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.532 [2024-11-27 08:51:30.108706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:33.532 [2024-11-27 08:51:30.108795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.532 08:51:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.532 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.789 BaseBdev2 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.789 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.790 [ 00:19:33.790 { 00:19:33.790 "name": "BaseBdev2", 00:19:33.790 "aliases": [ 00:19:33.790 "9edcd2e0-80f7-493f-9201-b45bd14ecbdd" 00:19:33.790 ], 00:19:33.790 "product_name": "Malloc disk", 00:19:33.790 "block_size": 512, 00:19:33.790 "num_blocks": 65536, 00:19:33.790 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:33.790 "assigned_rate_limits": { 00:19:33.790 "rw_ios_per_sec": 0, 00:19:33.790 "rw_mbytes_per_sec": 0, 00:19:33.790 "r_mbytes_per_sec": 0, 00:19:33.790 "w_mbytes_per_sec": 0 00:19:33.790 }, 00:19:33.790 "claimed": false, 00:19:33.790 "zoned": false, 00:19:33.790 "supported_io_types": { 00:19:33.790 "read": true, 00:19:33.790 "write": true, 00:19:33.790 "unmap": true, 00:19:33.790 "flush": true, 00:19:33.790 "reset": true, 00:19:33.790 "nvme_admin": false, 00:19:33.790 "nvme_io": false, 00:19:33.790 "nvme_io_md": false, 00:19:33.790 "write_zeroes": true, 00:19:33.790 "zcopy": true, 00:19:33.790 "get_zone_info": false, 00:19:33.790 "zone_management": false, 00:19:33.790 "zone_append": false, 00:19:33.790 "compare": false, 00:19:33.790 "compare_and_write": false, 00:19:33.790 "abort": true, 00:19:33.790 "seek_hole": false, 00:19:33.790 "seek_data": false, 00:19:33.790 "copy": true, 00:19:33.790 "nvme_iov_md": false 00:19:33.790 }, 00:19:33.790 "memory_domains": [ 00:19:33.790 { 00:19:33.790 "dma_device_id": "system", 00:19:33.790 "dma_device_type": 1 00:19:33.790 }, 
00:19:33.790 { 00:19:33.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.790 "dma_device_type": 2 00:19:33.790 } 00:19:33.790 ], 00:19:33.790 "driver_specific": {} 00:19:33.790 } 00:19:33.790 ] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.790 BaseBdev3 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.790 [ 00:19:33.790 { 00:19:33.790 "name": "BaseBdev3", 00:19:33.790 "aliases": [ 00:19:33.790 "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d" 00:19:33.790 ], 00:19:33.790 "product_name": "Malloc disk", 00:19:33.790 "block_size": 512, 00:19:33.790 "num_blocks": 65536, 00:19:33.790 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:33.790 "assigned_rate_limits": { 00:19:33.790 "rw_ios_per_sec": 0, 00:19:33.790 "rw_mbytes_per_sec": 0, 00:19:33.790 "r_mbytes_per_sec": 0, 00:19:33.790 "w_mbytes_per_sec": 0 00:19:33.790 }, 00:19:33.790 "claimed": false, 00:19:33.790 "zoned": false, 00:19:33.790 "supported_io_types": { 00:19:33.790 "read": true, 00:19:33.790 "write": true, 00:19:33.790 "unmap": true, 00:19:33.790 "flush": true, 00:19:33.790 "reset": true, 00:19:33.790 "nvme_admin": false, 00:19:33.790 "nvme_io": false, 00:19:33.790 "nvme_io_md": false, 00:19:33.790 "write_zeroes": true, 00:19:33.790 "zcopy": true, 00:19:33.790 "get_zone_info": false, 00:19:33.790 "zone_management": false, 00:19:33.790 "zone_append": false, 00:19:33.790 "compare": false, 00:19:33.790 "compare_and_write": false, 00:19:33.790 "abort": true, 00:19:33.790 "seek_hole": false, 00:19:33.790 "seek_data": false, 00:19:33.790 "copy": true, 00:19:33.790 "nvme_iov_md": false 00:19:33.790 }, 00:19:33.790 "memory_domains": [ 00:19:33.790 { 00:19:33.790 "dma_device_id": "system", 00:19:33.790 
"dma_device_type": 1 00:19:33.790 }, 00:19:33.790 { 00:19:33.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.790 "dma_device_type": 2 00:19:33.790 } 00:19:33.790 ], 00:19:33.790 "driver_specific": {} 00:19:33.790 } 00:19:33.790 ] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.790 BaseBdev4 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:33.790 08:51:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:33.790 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.791 [ 00:19:33.791 { 00:19:33.791 "name": "BaseBdev4", 00:19:33.791 "aliases": [ 00:19:33.791 "ddf8a813-c90b-4095-9b1a-279a320cd3e9" 00:19:33.791 ], 00:19:33.791 "product_name": "Malloc disk", 00:19:33.791 "block_size": 512, 00:19:33.791 "num_blocks": 65536, 00:19:33.791 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:33.791 "assigned_rate_limits": { 00:19:33.791 "rw_ios_per_sec": 0, 00:19:33.791 "rw_mbytes_per_sec": 0, 00:19:33.791 "r_mbytes_per_sec": 0, 00:19:33.791 "w_mbytes_per_sec": 0 00:19:33.791 }, 00:19:33.791 "claimed": false, 00:19:33.791 "zoned": false, 00:19:33.791 "supported_io_types": { 00:19:33.791 "read": true, 00:19:33.791 "write": true, 00:19:33.791 "unmap": true, 00:19:33.791 "flush": true, 00:19:33.791 "reset": true, 00:19:33.791 "nvme_admin": false, 00:19:33.791 "nvme_io": false, 00:19:33.791 "nvme_io_md": false, 00:19:33.791 "write_zeroes": true, 00:19:33.791 "zcopy": true, 00:19:33.791 "get_zone_info": false, 00:19:33.791 "zone_management": false, 00:19:33.791 "zone_append": false, 00:19:33.791 "compare": false, 00:19:33.791 "compare_and_write": false, 00:19:33.791 "abort": true, 00:19:33.791 "seek_hole": false, 00:19:33.791 "seek_data": false, 00:19:33.791 "copy": true, 00:19:33.791 "nvme_iov_md": false 00:19:33.791 }, 00:19:33.791 "memory_domains": [ 00:19:33.791 { 00:19:33.791 
"dma_device_id": "system", 00:19:33.791 "dma_device_type": 1 00:19:33.791 }, 00:19:33.791 { 00:19:33.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.791 "dma_device_type": 2 00:19:33.791 } 00:19:33.791 ], 00:19:33.791 "driver_specific": {} 00:19:33.791 } 00:19:33.791 ] 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.791 [2024-11-27 08:51:30.511170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:33.791 [2024-11-27 08:51:30.511234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:33.791 [2024-11-27 08:51:30.511269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.791 [2024-11-27 08:51:30.513955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.791 [2024-11-27 08:51:30.514161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.791 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.087 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.087 "name": "Existed_Raid", 00:19:34.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.087 "strip_size_kb": 64, 00:19:34.087 "state": "configuring", 00:19:34.087 "raid_level": "raid5f", 00:19:34.087 "superblock": false, 00:19:34.087 
"num_base_bdevs": 4, 00:19:34.088 "num_base_bdevs_discovered": 3, 00:19:34.088 "num_base_bdevs_operational": 4, 00:19:34.088 "base_bdevs_list": [ 00:19:34.088 { 00:19:34.088 "name": "BaseBdev1", 00:19:34.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.088 "is_configured": false, 00:19:34.088 "data_offset": 0, 00:19:34.088 "data_size": 0 00:19:34.088 }, 00:19:34.088 { 00:19:34.088 "name": "BaseBdev2", 00:19:34.088 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:34.088 "is_configured": true, 00:19:34.088 "data_offset": 0, 00:19:34.088 "data_size": 65536 00:19:34.088 }, 00:19:34.088 { 00:19:34.088 "name": "BaseBdev3", 00:19:34.088 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:34.088 "is_configured": true, 00:19:34.088 "data_offset": 0, 00:19:34.088 "data_size": 65536 00:19:34.088 }, 00:19:34.088 { 00:19:34.088 "name": "BaseBdev4", 00:19:34.088 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:34.088 "is_configured": true, 00:19:34.088 "data_offset": 0, 00:19:34.088 "data_size": 65536 00:19:34.088 } 00:19:34.088 ] 00:19:34.088 }' 00:19:34.088 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.088 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.362 [2024-11-27 08:51:31.055411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.362 "name": "Existed_Raid", 00:19:34.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.362 "strip_size_kb": 64, 00:19:34.362 "state": "configuring", 00:19:34.362 "raid_level": "raid5f", 00:19:34.362 "superblock": false, 00:19:34.362 "num_base_bdevs": 4, 
00:19:34.362 "num_base_bdevs_discovered": 2, 00:19:34.362 "num_base_bdevs_operational": 4, 00:19:34.362 "base_bdevs_list": [ 00:19:34.362 { 00:19:34.362 "name": "BaseBdev1", 00:19:34.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.362 "is_configured": false, 00:19:34.362 "data_offset": 0, 00:19:34.362 "data_size": 0 00:19:34.362 }, 00:19:34.362 { 00:19:34.362 "name": null, 00:19:34.362 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:34.362 "is_configured": false, 00:19:34.362 "data_offset": 0, 00:19:34.362 "data_size": 65536 00:19:34.362 }, 00:19:34.362 { 00:19:34.362 "name": "BaseBdev3", 00:19:34.362 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:34.362 "is_configured": true, 00:19:34.362 "data_offset": 0, 00:19:34.362 "data_size": 65536 00:19:34.362 }, 00:19:34.362 { 00:19:34.362 "name": "BaseBdev4", 00:19:34.362 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:34.362 "is_configured": true, 00:19:34.362 "data_offset": 0, 00:19:34.362 "data_size": 65536 00:19:34.362 } 00:19:34.362 ] 00:19:34.362 }' 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.362 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.930 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.930 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.930 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:34.930 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:34.931 08:51:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.931 [2024-11-27 08:51:31.673482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.931 BaseBdev1 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:34.931 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.931 08:51:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.189 [ 00:19:35.189 { 00:19:35.189 "name": "BaseBdev1", 00:19:35.189 "aliases": [ 00:19:35.189 "29396f10-b57f-4e98-b059-a27de146d7d6" 00:19:35.189 ], 00:19:35.189 "product_name": "Malloc disk", 00:19:35.189 "block_size": 512, 00:19:35.189 "num_blocks": 65536, 00:19:35.189 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:35.189 "assigned_rate_limits": { 00:19:35.189 "rw_ios_per_sec": 0, 00:19:35.189 "rw_mbytes_per_sec": 0, 00:19:35.189 "r_mbytes_per_sec": 0, 00:19:35.189 "w_mbytes_per_sec": 0 00:19:35.189 }, 00:19:35.189 "claimed": true, 00:19:35.189 "claim_type": "exclusive_write", 00:19:35.189 "zoned": false, 00:19:35.189 "supported_io_types": { 00:19:35.189 "read": true, 00:19:35.189 "write": true, 00:19:35.189 "unmap": true, 00:19:35.189 "flush": true, 00:19:35.189 "reset": true, 00:19:35.189 "nvme_admin": false, 00:19:35.189 "nvme_io": false, 00:19:35.189 "nvme_io_md": false, 00:19:35.190 "write_zeroes": true, 00:19:35.190 "zcopy": true, 00:19:35.190 "get_zone_info": false, 00:19:35.190 "zone_management": false, 00:19:35.190 "zone_append": false, 00:19:35.190 "compare": false, 00:19:35.190 "compare_and_write": false, 00:19:35.190 "abort": true, 00:19:35.190 "seek_hole": false, 00:19:35.190 "seek_data": false, 00:19:35.190 "copy": true, 00:19:35.190 "nvme_iov_md": false 00:19:35.190 }, 00:19:35.190 "memory_domains": [ 00:19:35.190 { 00:19:35.190 "dma_device_id": "system", 00:19:35.190 "dma_device_type": 1 00:19:35.190 }, 00:19:35.190 { 00:19:35.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.190 "dma_device_type": 2 00:19:35.190 } 00:19:35.190 ], 00:19:35.190 "driver_specific": {} 00:19:35.190 } 00:19:35.190 ] 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:35.190 08:51:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.190 "name": "Existed_Raid", 00:19:35.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.190 "strip_size_kb": 64, 00:19:35.190 "state": 
"configuring", 00:19:35.190 "raid_level": "raid5f", 00:19:35.190 "superblock": false, 00:19:35.190 "num_base_bdevs": 4, 00:19:35.190 "num_base_bdevs_discovered": 3, 00:19:35.190 "num_base_bdevs_operational": 4, 00:19:35.190 "base_bdevs_list": [ 00:19:35.190 { 00:19:35.190 "name": "BaseBdev1", 00:19:35.190 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:35.190 "is_configured": true, 00:19:35.190 "data_offset": 0, 00:19:35.190 "data_size": 65536 00:19:35.190 }, 00:19:35.190 { 00:19:35.190 "name": null, 00:19:35.190 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:35.190 "is_configured": false, 00:19:35.190 "data_offset": 0, 00:19:35.190 "data_size": 65536 00:19:35.190 }, 00:19:35.190 { 00:19:35.190 "name": "BaseBdev3", 00:19:35.190 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:35.190 "is_configured": true, 00:19:35.190 "data_offset": 0, 00:19:35.190 "data_size": 65536 00:19:35.190 }, 00:19:35.190 { 00:19:35.190 "name": "BaseBdev4", 00:19:35.190 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:35.190 "is_configured": true, 00:19:35.190 "data_offset": 0, 00:19:35.190 "data_size": 65536 00:19:35.190 } 00:19:35.190 ] 00:19:35.190 }' 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.190 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.758 08:51:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.758 [2024-11-27 08:51:32.281734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.758 08:51:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.758 "name": "Existed_Raid", 00:19:35.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.758 "strip_size_kb": 64, 00:19:35.758 "state": "configuring", 00:19:35.758 "raid_level": "raid5f", 00:19:35.758 "superblock": false, 00:19:35.758 "num_base_bdevs": 4, 00:19:35.758 "num_base_bdevs_discovered": 2, 00:19:35.758 "num_base_bdevs_operational": 4, 00:19:35.758 "base_bdevs_list": [ 00:19:35.758 { 00:19:35.758 "name": "BaseBdev1", 00:19:35.758 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:35.758 "is_configured": true, 00:19:35.758 "data_offset": 0, 00:19:35.758 "data_size": 65536 00:19:35.758 }, 00:19:35.758 { 00:19:35.758 "name": null, 00:19:35.758 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:35.758 "is_configured": false, 00:19:35.758 "data_offset": 0, 00:19:35.758 "data_size": 65536 00:19:35.758 }, 00:19:35.758 { 00:19:35.758 "name": null, 00:19:35.758 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:35.758 "is_configured": false, 00:19:35.758 "data_offset": 0, 00:19:35.758 "data_size": 65536 00:19:35.758 }, 00:19:35.758 { 00:19:35.758 "name": "BaseBdev4", 00:19:35.758 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:35.758 "is_configured": true, 00:19:35.758 "data_offset": 0, 00:19:35.758 "data_size": 65536 00:19:35.758 } 00:19:35.758 ] 00:19:35.758 }' 00:19:35.758 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.758 08:51:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.326 [2024-11-27 08:51:32.845882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.326 
08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.326 "name": "Existed_Raid", 00:19:36.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.326 "strip_size_kb": 64, 00:19:36.326 "state": "configuring", 00:19:36.326 "raid_level": "raid5f", 00:19:36.326 "superblock": false, 00:19:36.326 "num_base_bdevs": 4, 00:19:36.326 "num_base_bdevs_discovered": 3, 00:19:36.326 "num_base_bdevs_operational": 4, 00:19:36.326 "base_bdevs_list": [ 00:19:36.326 { 00:19:36.326 "name": "BaseBdev1", 00:19:36.326 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:36.326 "is_configured": true, 00:19:36.326 "data_offset": 0, 00:19:36.326 "data_size": 65536 00:19:36.326 }, 00:19:36.326 { 00:19:36.326 "name": null, 00:19:36.326 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:36.326 "is_configured": 
false, 00:19:36.326 "data_offset": 0, 00:19:36.326 "data_size": 65536 00:19:36.326 }, 00:19:36.326 { 00:19:36.326 "name": "BaseBdev3", 00:19:36.326 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:36.326 "is_configured": true, 00:19:36.326 "data_offset": 0, 00:19:36.326 "data_size": 65536 00:19:36.326 }, 00:19:36.326 { 00:19:36.326 "name": "BaseBdev4", 00:19:36.326 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:36.326 "is_configured": true, 00:19:36.326 "data_offset": 0, 00:19:36.326 "data_size": 65536 00:19:36.326 } 00:19:36.326 ] 00:19:36.326 }' 00:19:36.326 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.327 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.893 [2024-11-27 08:51:33.454027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.893 "name": "Existed_Raid", 00:19:36.893 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:36.893 "strip_size_kb": 64, 00:19:36.893 "state": "configuring", 00:19:36.893 "raid_level": "raid5f", 00:19:36.893 "superblock": false, 00:19:36.893 "num_base_bdevs": 4, 00:19:36.893 "num_base_bdevs_discovered": 2, 00:19:36.893 "num_base_bdevs_operational": 4, 00:19:36.893 "base_bdevs_list": [ 00:19:36.893 { 00:19:36.893 "name": null, 00:19:36.893 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:36.893 "is_configured": false, 00:19:36.893 "data_offset": 0, 00:19:36.893 "data_size": 65536 00:19:36.893 }, 00:19:36.893 { 00:19:36.893 "name": null, 00:19:36.893 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:36.893 "is_configured": false, 00:19:36.893 "data_offset": 0, 00:19:36.893 "data_size": 65536 00:19:36.893 }, 00:19:36.893 { 00:19:36.893 "name": "BaseBdev3", 00:19:36.893 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:36.893 "is_configured": true, 00:19:36.893 "data_offset": 0, 00:19:36.893 "data_size": 65536 00:19:36.893 }, 00:19:36.893 { 00:19:36.893 "name": "BaseBdev4", 00:19:36.893 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:36.893 "is_configured": true, 00:19:36.893 "data_offset": 0, 00:19:36.893 "data_size": 65536 00:19:36.893 } 00:19:36.893 ] 00:19:36.893 }' 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.893 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.461 [2024-11-27 08:51:34.112973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.461 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.462 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.462 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.462 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.462 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.462 "name": "Existed_Raid", 00:19:37.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.462 "strip_size_kb": 64, 00:19:37.462 "state": "configuring", 00:19:37.462 "raid_level": "raid5f", 00:19:37.462 "superblock": false, 00:19:37.462 "num_base_bdevs": 4, 00:19:37.462 "num_base_bdevs_discovered": 3, 00:19:37.462 "num_base_bdevs_operational": 4, 00:19:37.462 "base_bdevs_list": [ 00:19:37.462 { 00:19:37.462 "name": null, 00:19:37.462 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:37.462 "is_configured": false, 00:19:37.462 "data_offset": 0, 00:19:37.462 "data_size": 65536 00:19:37.462 }, 00:19:37.462 { 00:19:37.462 "name": "BaseBdev2", 00:19:37.462 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:37.462 "is_configured": true, 00:19:37.462 "data_offset": 0, 00:19:37.462 "data_size": 65536 00:19:37.462 }, 00:19:37.462 { 00:19:37.462 "name": "BaseBdev3", 00:19:37.462 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:37.462 "is_configured": true, 00:19:37.462 "data_offset": 0, 00:19:37.462 "data_size": 65536 00:19:37.462 }, 00:19:37.462 { 00:19:37.462 "name": "BaseBdev4", 00:19:37.462 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:37.462 "is_configured": true, 00:19:37.462 "data_offset": 0, 00:19:37.462 "data_size": 65536 00:19:37.462 } 00:19:37.462 ] 00:19:37.462 }' 00:19:37.462 08:51:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.462 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 29396f10-b57f-4e98-b059-a27de146d7d6 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.028 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.028 [2024-11-27 08:51:34.767242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:38.028 [2024-11-27 
08:51:34.767645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:38.028 [2024-11-27 08:51:34.767670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:38.029 [2024-11-27 08:51:34.768051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:38.029 [2024-11-27 08:51:34.774945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:38.029 [2024-11-27 08:51:34.775095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:38.029 [2024-11-27 08:51:34.775615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.029 NewBaseBdev 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local i 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.029 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.287 [ 00:19:38.287 { 00:19:38.287 "name": "NewBaseBdev", 00:19:38.287 "aliases": [ 00:19:38.287 "29396f10-b57f-4e98-b059-a27de146d7d6" 00:19:38.287 ], 00:19:38.287 "product_name": "Malloc disk", 00:19:38.287 "block_size": 512, 00:19:38.287 "num_blocks": 65536, 00:19:38.287 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:38.287 "assigned_rate_limits": { 00:19:38.287 "rw_ios_per_sec": 0, 00:19:38.287 "rw_mbytes_per_sec": 0, 00:19:38.287 "r_mbytes_per_sec": 0, 00:19:38.287 "w_mbytes_per_sec": 0 00:19:38.287 }, 00:19:38.287 "claimed": true, 00:19:38.287 "claim_type": "exclusive_write", 00:19:38.287 "zoned": false, 00:19:38.287 "supported_io_types": { 00:19:38.287 "read": true, 00:19:38.287 "write": true, 00:19:38.287 "unmap": true, 00:19:38.287 "flush": true, 00:19:38.287 "reset": true, 00:19:38.287 "nvme_admin": false, 00:19:38.287 "nvme_io": false, 00:19:38.287 "nvme_io_md": false, 00:19:38.287 "write_zeroes": true, 00:19:38.287 "zcopy": true, 00:19:38.287 "get_zone_info": false, 00:19:38.287 "zone_management": false, 00:19:38.287 "zone_append": false, 00:19:38.287 "compare": false, 00:19:38.287 "compare_and_write": false, 00:19:38.287 "abort": true, 00:19:38.287 "seek_hole": false, 00:19:38.287 "seek_data": false, 00:19:38.287 "copy": true, 00:19:38.287 "nvme_iov_md": false 00:19:38.287 }, 00:19:38.287 "memory_domains": [ 00:19:38.287 { 00:19:38.287 "dma_device_id": "system", 00:19:38.287 "dma_device_type": 1 00:19:38.287 }, 00:19:38.287 { 00:19:38.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.287 "dma_device_type": 2 00:19:38.287 } 
00:19:38.287 ], 00:19:38.287 "driver_specific": {} 00:19:38.287 } 00:19:38.287 ] 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # return 0 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.287 "name": "Existed_Raid", 00:19:38.287 "uuid": "e5a9b72c-f0c0-47c2-baf3-beaa349938f0", 00:19:38.287 "strip_size_kb": 64, 00:19:38.287 "state": "online", 00:19:38.287 "raid_level": "raid5f", 00:19:38.287 "superblock": false, 00:19:38.287 "num_base_bdevs": 4, 00:19:38.287 "num_base_bdevs_discovered": 4, 00:19:38.287 "num_base_bdevs_operational": 4, 00:19:38.287 "base_bdevs_list": [ 00:19:38.287 { 00:19:38.287 "name": "NewBaseBdev", 00:19:38.287 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:38.287 "is_configured": true, 00:19:38.287 "data_offset": 0, 00:19:38.287 "data_size": 65536 00:19:38.287 }, 00:19:38.287 { 00:19:38.287 "name": "BaseBdev2", 00:19:38.287 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:38.287 "is_configured": true, 00:19:38.287 "data_offset": 0, 00:19:38.287 "data_size": 65536 00:19:38.287 }, 00:19:38.287 { 00:19:38.287 "name": "BaseBdev3", 00:19:38.287 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:38.287 "is_configured": true, 00:19:38.287 "data_offset": 0, 00:19:38.287 "data_size": 65536 00:19:38.287 }, 00:19:38.287 { 00:19:38.287 "name": "BaseBdev4", 00:19:38.287 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:38.287 "is_configured": true, 00:19:38.287 "data_offset": 0, 00:19:38.287 "data_size": 65536 00:19:38.287 } 00:19:38.287 ] 00:19:38.287 }' 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.287 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.545 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:38.545 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:38.545 08:51:35 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:38.545 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:38.545 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:38.545 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:38.805 [2024-11-27 08:51:35.308273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:38.805 "name": "Existed_Raid", 00:19:38.805 "aliases": [ 00:19:38.805 "e5a9b72c-f0c0-47c2-baf3-beaa349938f0" 00:19:38.805 ], 00:19:38.805 "product_name": "Raid Volume", 00:19:38.805 "block_size": 512, 00:19:38.805 "num_blocks": 196608, 00:19:38.805 "uuid": "e5a9b72c-f0c0-47c2-baf3-beaa349938f0", 00:19:38.805 "assigned_rate_limits": { 00:19:38.805 "rw_ios_per_sec": 0, 00:19:38.805 "rw_mbytes_per_sec": 0, 00:19:38.805 "r_mbytes_per_sec": 0, 00:19:38.805 "w_mbytes_per_sec": 0 00:19:38.805 }, 00:19:38.805 "claimed": false, 00:19:38.805 "zoned": false, 00:19:38.805 "supported_io_types": { 00:19:38.805 "read": true, 00:19:38.805 "write": true, 00:19:38.805 "unmap": false, 00:19:38.805 "flush": false, 00:19:38.805 "reset": true, 00:19:38.805 "nvme_admin": false, 00:19:38.805 "nvme_io": false, 00:19:38.805 "nvme_io_md": 
false, 00:19:38.805 "write_zeroes": true, 00:19:38.805 "zcopy": false, 00:19:38.805 "get_zone_info": false, 00:19:38.805 "zone_management": false, 00:19:38.805 "zone_append": false, 00:19:38.805 "compare": false, 00:19:38.805 "compare_and_write": false, 00:19:38.805 "abort": false, 00:19:38.805 "seek_hole": false, 00:19:38.805 "seek_data": false, 00:19:38.805 "copy": false, 00:19:38.805 "nvme_iov_md": false 00:19:38.805 }, 00:19:38.805 "driver_specific": { 00:19:38.805 "raid": { 00:19:38.805 "uuid": "e5a9b72c-f0c0-47c2-baf3-beaa349938f0", 00:19:38.805 "strip_size_kb": 64, 00:19:38.805 "state": "online", 00:19:38.805 "raid_level": "raid5f", 00:19:38.805 "superblock": false, 00:19:38.805 "num_base_bdevs": 4, 00:19:38.805 "num_base_bdevs_discovered": 4, 00:19:38.805 "num_base_bdevs_operational": 4, 00:19:38.805 "base_bdevs_list": [ 00:19:38.805 { 00:19:38.805 "name": "NewBaseBdev", 00:19:38.805 "uuid": "29396f10-b57f-4e98-b059-a27de146d7d6", 00:19:38.805 "is_configured": true, 00:19:38.805 "data_offset": 0, 00:19:38.805 "data_size": 65536 00:19:38.805 }, 00:19:38.805 { 00:19:38.805 "name": "BaseBdev2", 00:19:38.805 "uuid": "9edcd2e0-80f7-493f-9201-b45bd14ecbdd", 00:19:38.805 "is_configured": true, 00:19:38.805 "data_offset": 0, 00:19:38.805 "data_size": 65536 00:19:38.805 }, 00:19:38.805 { 00:19:38.805 "name": "BaseBdev3", 00:19:38.805 "uuid": "0670f07b-1526-4f2b-b2ed-27a3ffb0e73d", 00:19:38.805 "is_configured": true, 00:19:38.805 "data_offset": 0, 00:19:38.805 "data_size": 65536 00:19:38.805 }, 00:19:38.805 { 00:19:38.805 "name": "BaseBdev4", 00:19:38.805 "uuid": "ddf8a813-c90b-4095-9b1a-279a320cd3e9", 00:19:38.805 "is_configured": true, 00:19:38.805 "data_offset": 0, 00:19:38.805 "data_size": 65536 00:19:38.805 } 00:19:38.805 ] 00:19:38.805 } 00:19:38.805 } 00:19:38.805 }' 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:38.805 08:51:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:38.805 BaseBdev2 00:19:38.805 BaseBdev3 00:19:38.805 BaseBdev4' 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:38.805 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.806 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.065 08:51:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.065 [2024-11-27 08:51:35.652030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.065 [2024-11-27 08:51:35.652094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.065 [2024-11-27 08:51:35.652216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.065 [2024-11-27 08:51:35.652680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.065 [2024-11-27 08:51:35.652710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83294 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' -z 83294 ']' 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # kill -0 83294 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # uname 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 
00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 83294 00:19:39.065 killing process with pid 83294 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 83294' 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # kill 83294 00:19:39.065 [2024-11-27 08:51:35.693464] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:39.065 08:51:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@975 -- # wait 83294 00:19:39.325 [2024-11-27 08:51:36.076070] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:40.733 00:19:40.733 real 0m12.948s 00:19:40.733 user 0m21.138s 00:19:40.733 sys 0m2.015s 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.733 ************************************ 00:19:40.733 END TEST raid5f_state_function_test 00:19:40.733 ************************************ 00:19:40.733 08:51:37 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:19:40.733 08:51:37 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:19:40.733 08:51:37 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:19:40.733 08:51:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.733 ************************************ 00:19:40.733 START TEST 
raid5f_state_function_test_sb 00:19:40.733 ************************************ 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # raid_state_function_test raid5f 4 true 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:40.733 
08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83971 00:19:40.733 Process raid pid: 83971 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83971' 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:40.733 08:51:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83971 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # '[' -z 83971 ']' 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:40.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:40.733 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.733 [2024-11-27 08:51:37.360810] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:19:40.733 [2024-11-27 08:51:37.360983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.992 [2024-11-27 08:51:37.537842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.992 [2024-11-27 08:51:37.685793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.251 [2024-11-27 08:51:37.913121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.251 [2024-11-27 08:51:37.913172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@865 -- # return 0 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.817 [2024-11-27 08:51:38.367285] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:41.817 [2024-11-27 08:51:38.367383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:41.817 [2024-11-27 08:51:38.367404] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:41.817 [2024-11-27 08:51:38.367421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:41.817 [2024-11-27 08:51:38.367431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:19:41.817 [2024-11-27 08:51:38.367446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:41.817 [2024-11-27 08:51:38.367455] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:41.817 [2024-11-27 08:51:38.367470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.817 "name": "Existed_Raid", 00:19:41.817 "uuid": "aca82727-2181-486e-a02f-7ec0ea49a17f", 00:19:41.817 "strip_size_kb": 64, 00:19:41.817 "state": "configuring", 00:19:41.817 "raid_level": "raid5f", 00:19:41.817 "superblock": true, 00:19:41.817 "num_base_bdevs": 4, 00:19:41.817 "num_base_bdevs_discovered": 0, 00:19:41.817 "num_base_bdevs_operational": 4, 00:19:41.817 "base_bdevs_list": [ 00:19:41.817 { 00:19:41.817 "name": "BaseBdev1", 00:19:41.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.817 "is_configured": false, 00:19:41.817 "data_offset": 0, 00:19:41.817 "data_size": 0 00:19:41.817 }, 00:19:41.817 { 00:19:41.817 "name": "BaseBdev2", 00:19:41.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.817 "is_configured": false, 00:19:41.817 "data_offset": 0, 00:19:41.817 "data_size": 0 00:19:41.817 }, 00:19:41.817 { 00:19:41.817 "name": "BaseBdev3", 00:19:41.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.817 "is_configured": false, 00:19:41.817 "data_offset": 0, 00:19:41.817 "data_size": 0 00:19:41.817 }, 00:19:41.817 { 00:19:41.817 "name": "BaseBdev4", 00:19:41.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.817 "is_configured": false, 00:19:41.817 "data_offset": 0, 00:19:41.817 "data_size": 0 00:19:41.817 } 00:19:41.817 ] 00:19:41.817 }' 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.817 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
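The `verify_raid_bdev_state` helper traced above fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, selects the entry with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares its fields against the expected state, level, strip size, and operational count. Below is a minimal Python rendering of those same field checks against the exact JSON shape captured in the log; the function name mirrors the bash helper for readability, but this is an illustrative sketch, not SPDK code.

```python
import json

# JSON shape as reported by `bdev_raid_get_bdevs` in the log records above
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Same comparisons the bash helper performs on the jq-extracted fields
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered count must agree with the per-bdev is_configured flags
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4))  # 0
```

At this point in the log no base bdevs exist yet, so every `is_configured` flag is false and the discovered count is 0 while the raid stays in the "configuring" state.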
00:19:42.075 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:42.075 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.075 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.075 [2024-11-27 08:51:38.831369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:42.075 [2024-11-27 08:51:38.831426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:42.357 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.357 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:42.357 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.357 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.357 [2024-11-27 08:51:38.839360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:42.357 [2024-11-27 08:51:38.839420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:42.357 [2024-11-27 08:51:38.839435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:42.357 [2024-11-27 08:51:38.839452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:42.357 [2024-11-27 08:51:38.839462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:42.357 [2024-11-27 08:51:38.839476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:42.357 [2024-11-27 08:51:38.839486] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:42.357 [2024-11-27 08:51:38.839500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:42.357 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.357 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.358 [2024-11-27 08:51:38.887798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.358 BaseBdev1 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.358 [ 00:19:42.358 { 00:19:42.358 "name": "BaseBdev1", 00:19:42.358 "aliases": [ 00:19:42.358 "3dd3991e-c069-40fb-a8cd-edb72eecc80d" 00:19:42.358 ], 00:19:42.358 "product_name": "Malloc disk", 00:19:42.358 "block_size": 512, 00:19:42.358 "num_blocks": 65536, 00:19:42.358 "uuid": "3dd3991e-c069-40fb-a8cd-edb72eecc80d", 00:19:42.358 "assigned_rate_limits": { 00:19:42.358 "rw_ios_per_sec": 0, 00:19:42.358 "rw_mbytes_per_sec": 0, 00:19:42.358 "r_mbytes_per_sec": 0, 00:19:42.358 "w_mbytes_per_sec": 0 00:19:42.358 }, 00:19:42.358 "claimed": true, 00:19:42.358 "claim_type": "exclusive_write", 00:19:42.358 "zoned": false, 00:19:42.358 "supported_io_types": { 00:19:42.358 "read": true, 00:19:42.358 "write": true, 00:19:42.358 "unmap": true, 00:19:42.358 "flush": true, 00:19:42.358 "reset": true, 00:19:42.358 "nvme_admin": false, 00:19:42.358 "nvme_io": false, 00:19:42.358 "nvme_io_md": false, 00:19:42.358 "write_zeroes": true, 00:19:42.358 "zcopy": true, 00:19:42.358 "get_zone_info": false, 00:19:42.358 "zone_management": false, 00:19:42.358 "zone_append": false, 00:19:42.358 "compare": false, 00:19:42.358 "compare_and_write": false, 00:19:42.358 "abort": true, 00:19:42.358 "seek_hole": false, 00:19:42.358 "seek_data": false, 00:19:42.358 "copy": true, 00:19:42.358 "nvme_iov_md": false 00:19:42.358 }, 00:19:42.358 "memory_domains": [ 00:19:42.358 { 00:19:42.358 "dma_device_id": "system", 00:19:42.358 "dma_device_type": 1 00:19:42.358 }, 00:19:42.358 { 00:19:42.358 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:42.358 "dma_device_type": 2 00:19:42.358 } 00:19:42.358 ], 00:19:42.358 "driver_specific": {} 00:19:42.358 } 00:19:42.358 ] 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.358 08:51:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.358 "name": "Existed_Raid", 00:19:42.358 "uuid": "9af47c4f-194a-47ec-b38b-0b94981b3652", 00:19:42.358 "strip_size_kb": 64, 00:19:42.358 "state": "configuring", 00:19:42.358 "raid_level": "raid5f", 00:19:42.358 "superblock": true, 00:19:42.358 "num_base_bdevs": 4, 00:19:42.358 "num_base_bdevs_discovered": 1, 00:19:42.358 "num_base_bdevs_operational": 4, 00:19:42.358 "base_bdevs_list": [ 00:19:42.358 { 00:19:42.358 "name": "BaseBdev1", 00:19:42.358 "uuid": "3dd3991e-c069-40fb-a8cd-edb72eecc80d", 00:19:42.358 "is_configured": true, 00:19:42.358 "data_offset": 2048, 00:19:42.358 "data_size": 63488 00:19:42.358 }, 00:19:42.358 { 00:19:42.358 "name": "BaseBdev2", 00:19:42.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.358 "is_configured": false, 00:19:42.358 "data_offset": 0, 00:19:42.358 "data_size": 0 00:19:42.358 }, 00:19:42.358 { 00:19:42.358 "name": "BaseBdev3", 00:19:42.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.358 "is_configured": false, 00:19:42.358 "data_offset": 0, 00:19:42.358 "data_size": 0 00:19:42.358 }, 00:19:42.358 { 00:19:42.358 "name": "BaseBdev4", 00:19:42.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.358 "is_configured": false, 00:19:42.358 "data_offset": 0, 00:19:42.358 "data_size": 0 00:19:42.358 } 00:19:42.358 ] 00:19:42.358 }' 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.358 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:42.936 08:51:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.936 [2024-11-27 08:51:39.412004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:42.936 [2024-11-27 08:51:39.412083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.936 [2024-11-27 08:51:39.420058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.936 [2024-11-27 08:51:39.422735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:42.936 [2024-11-27 08:51:39.422793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:42.936 [2024-11-27 08:51:39.422810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:42.936 [2024-11-27 08:51:39.422827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:42.936 [2024-11-27 08:51:39.422837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:42.936 [2024-11-27 08:51:39.422851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.936 08:51:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.936 "name": "Existed_Raid", 00:19:42.936 "uuid": "2e401a69-dfa4-4dcd-842a-7bdb5e3b2185", 00:19:42.936 "strip_size_kb": 64, 00:19:42.936 "state": "configuring", 00:19:42.936 "raid_level": "raid5f", 00:19:42.936 "superblock": true, 00:19:42.936 "num_base_bdevs": 4, 00:19:42.936 "num_base_bdevs_discovered": 1, 00:19:42.936 "num_base_bdevs_operational": 4, 00:19:42.936 "base_bdevs_list": [ 00:19:42.936 { 00:19:42.936 "name": "BaseBdev1", 00:19:42.936 "uuid": "3dd3991e-c069-40fb-a8cd-edb72eecc80d", 00:19:42.936 "is_configured": true, 00:19:42.936 "data_offset": 2048, 00:19:42.936 "data_size": 63488 00:19:42.936 }, 00:19:42.936 { 00:19:42.936 "name": "BaseBdev2", 00:19:42.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.936 "is_configured": false, 00:19:42.936 "data_offset": 0, 00:19:42.936 "data_size": 0 00:19:42.936 }, 00:19:42.936 { 00:19:42.936 "name": "BaseBdev3", 00:19:42.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.936 "is_configured": false, 00:19:42.936 "data_offset": 0, 00:19:42.936 "data_size": 0 00:19:42.936 }, 00:19:42.936 { 00:19:42.936 "name": "BaseBdev4", 00:19:42.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.936 "is_configured": false, 00:19:42.936 "data_offset": 0, 00:19:42.936 "data_size": 0 00:19:42.936 } 00:19:42.936 ] 00:19:42.936 }' 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.936 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.195 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:43.195 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:43.195 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.453 [2024-11-27 08:51:39.966187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.453 BaseBdev2 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.453 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.453 [ 00:19:43.453 { 00:19:43.453 "name": "BaseBdev2", 00:19:43.453 "aliases": [ 00:19:43.453 
"6620851c-bc75-43ed-aa04-b15277e00981" 00:19:43.453 ], 00:19:43.453 "product_name": "Malloc disk", 00:19:43.453 "block_size": 512, 00:19:43.453 "num_blocks": 65536, 00:19:43.453 "uuid": "6620851c-bc75-43ed-aa04-b15277e00981", 00:19:43.453 "assigned_rate_limits": { 00:19:43.453 "rw_ios_per_sec": 0, 00:19:43.453 "rw_mbytes_per_sec": 0, 00:19:43.453 "r_mbytes_per_sec": 0, 00:19:43.453 "w_mbytes_per_sec": 0 00:19:43.453 }, 00:19:43.453 "claimed": true, 00:19:43.453 "claim_type": "exclusive_write", 00:19:43.453 "zoned": false, 00:19:43.453 "supported_io_types": { 00:19:43.453 "read": true, 00:19:43.453 "write": true, 00:19:43.453 "unmap": true, 00:19:43.453 "flush": true, 00:19:43.453 "reset": true, 00:19:43.453 "nvme_admin": false, 00:19:43.453 "nvme_io": false, 00:19:43.453 "nvme_io_md": false, 00:19:43.453 "write_zeroes": true, 00:19:43.453 "zcopy": true, 00:19:43.453 "get_zone_info": false, 00:19:43.453 "zone_management": false, 00:19:43.453 "zone_append": false, 00:19:43.453 "compare": false, 00:19:43.453 "compare_and_write": false, 00:19:43.453 "abort": true, 00:19:43.453 "seek_hole": false, 00:19:43.453 "seek_data": false, 00:19:43.453 "copy": true, 00:19:43.453 "nvme_iov_md": false 00:19:43.453 }, 00:19:43.453 "memory_domains": [ 00:19:43.453 { 00:19:43.453 "dma_device_id": "system", 00:19:43.454 "dma_device_type": 1 00:19:43.454 }, 00:19:43.454 { 00:19:43.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.454 "dma_device_type": 2 00:19:43.454 } 00:19:43.454 ], 00:19:43.454 "driver_specific": {} 00:19:43.454 } 00:19:43.454 ] 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
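The `(( i++ ))` / `(( i < num_base_bdevs ))` records show the test adding one `bdev_malloc_create 32 512 -b BaseBdevN` base bdev per iteration and re-verifying the raid state after each. The geometry in these records is internally consistent and can be checked with simple arithmetic: a 32 MiB malloc bdev with 512-byte blocks has 65536 blocks; with the `-s` superblock option each base bdev reports `data_offset` 2048 and `data_size` 63488; and the assembled raid5f later reports "blockcnt 190464, blocklen 512", which matches (4 - 1) x 63488 since one base bdev's worth of capacity goes to parity. This is an arithmetic sketch of those log values, not SPDK code.

```python
MIB = 1024 * 1024

# bdev_malloc_create 32 512 -> 32 MiB bdev with 512-byte blocks
malloc_size_mb, block_size = 32, 512
num_blocks = malloc_size_mb * MIB // block_size   # 65536, as in "num_blocks": 65536

# With the superblock option, part of each base bdev is reserved for metadata:
# the log shows data_offset 2048 and data_size 63488 per configured base bdev.
data_offset = 2048
data_size = num_blocks - data_offset              # 63488

# raid5f keeps one parity strip per stripe, so usable capacity spans N-1 bdevs
num_base_bdevs = 4
raid_blockcnt = (num_base_bdevs - 1) * data_size  # 190464, matching the log

print(num_blocks, data_size, raid_blockcnt)
```

Running this reproduces the three sizes seen in the log (65536-block base bdevs, 63488-block data regions, and a 190464-block raid), which is a quick sanity check that the superblock reservation and raid5f capacity accounting line up.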
00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.454 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.454 "name": "Existed_Raid", 00:19:43.454 "uuid": 
"2e401a69-dfa4-4dcd-842a-7bdb5e3b2185", 00:19:43.454 "strip_size_kb": 64, 00:19:43.454 "state": "configuring", 00:19:43.454 "raid_level": "raid5f", 00:19:43.454 "superblock": true, 00:19:43.454 "num_base_bdevs": 4, 00:19:43.454 "num_base_bdevs_discovered": 2, 00:19:43.454 "num_base_bdevs_operational": 4, 00:19:43.454 "base_bdevs_list": [ 00:19:43.454 { 00:19:43.454 "name": "BaseBdev1", 00:19:43.454 "uuid": "3dd3991e-c069-40fb-a8cd-edb72eecc80d", 00:19:43.454 "is_configured": true, 00:19:43.454 "data_offset": 2048, 00:19:43.454 "data_size": 63488 00:19:43.454 }, 00:19:43.454 { 00:19:43.454 "name": "BaseBdev2", 00:19:43.454 "uuid": "6620851c-bc75-43ed-aa04-b15277e00981", 00:19:43.454 "is_configured": true, 00:19:43.454 "data_offset": 2048, 00:19:43.454 "data_size": 63488 00:19:43.454 }, 00:19:43.454 { 00:19:43.454 "name": "BaseBdev3", 00:19:43.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.454 "is_configured": false, 00:19:43.454 "data_offset": 0, 00:19:43.454 "data_size": 0 00:19:43.454 }, 00:19:43.454 { 00:19:43.454 "name": "BaseBdev4", 00:19:43.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.454 "is_configured": false, 00:19:43.454 "data_offset": 0, 00:19:43.454 "data_size": 0 00:19:43.454 } 00:19:43.454 ] 00:19:43.454 }' 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.454 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.019 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 [2024-11-27 08:51:40.557208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:44.020 BaseBdev3 
00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 [ 00:19:44.020 { 00:19:44.020 "name": "BaseBdev3", 00:19:44.020 "aliases": [ 00:19:44.020 "18c75446-59c5-4071-9dd7-28445f03954c" 00:19:44.020 ], 00:19:44.020 "product_name": "Malloc disk", 00:19:44.020 "block_size": 512, 00:19:44.020 "num_blocks": 65536, 00:19:44.020 "uuid": "18c75446-59c5-4071-9dd7-28445f03954c", 00:19:44.020 
"assigned_rate_limits": { 00:19:44.020 "rw_ios_per_sec": 0, 00:19:44.020 "rw_mbytes_per_sec": 0, 00:19:44.020 "r_mbytes_per_sec": 0, 00:19:44.020 "w_mbytes_per_sec": 0 00:19:44.020 }, 00:19:44.020 "claimed": true, 00:19:44.020 "claim_type": "exclusive_write", 00:19:44.020 "zoned": false, 00:19:44.020 "supported_io_types": { 00:19:44.020 "read": true, 00:19:44.020 "write": true, 00:19:44.020 "unmap": true, 00:19:44.020 "flush": true, 00:19:44.020 "reset": true, 00:19:44.020 "nvme_admin": false, 00:19:44.020 "nvme_io": false, 00:19:44.020 "nvme_io_md": false, 00:19:44.020 "write_zeroes": true, 00:19:44.020 "zcopy": true, 00:19:44.020 "get_zone_info": false, 00:19:44.020 "zone_management": false, 00:19:44.020 "zone_append": false, 00:19:44.020 "compare": false, 00:19:44.020 "compare_and_write": false, 00:19:44.020 "abort": true, 00:19:44.020 "seek_hole": false, 00:19:44.020 "seek_data": false, 00:19:44.020 "copy": true, 00:19:44.020 "nvme_iov_md": false 00:19:44.020 }, 00:19:44.020 "memory_domains": [ 00:19:44.020 { 00:19:44.020 "dma_device_id": "system", 00:19:44.020 "dma_device_type": 1 00:19:44.020 }, 00:19:44.020 { 00:19:44.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.020 "dma_device_type": 2 00:19:44.020 } 00:19:44.020 ], 00:19:44.020 "driver_specific": {} 00:19:44.020 } 00:19:44.020 ] 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.020 "name": "Existed_Raid", 00:19:44.020 "uuid": "2e401a69-dfa4-4dcd-842a-7bdb5e3b2185", 00:19:44.020 "strip_size_kb": 64, 00:19:44.020 "state": "configuring", 00:19:44.020 "raid_level": "raid5f", 00:19:44.020 "superblock": true, 00:19:44.020 "num_base_bdevs": 4, 00:19:44.020 "num_base_bdevs_discovered": 3, 
00:19:44.020 "num_base_bdevs_operational": 4, 00:19:44.020 "base_bdevs_list": [ 00:19:44.020 { 00:19:44.020 "name": "BaseBdev1", 00:19:44.020 "uuid": "3dd3991e-c069-40fb-a8cd-edb72eecc80d", 00:19:44.020 "is_configured": true, 00:19:44.020 "data_offset": 2048, 00:19:44.020 "data_size": 63488 00:19:44.020 }, 00:19:44.020 { 00:19:44.020 "name": "BaseBdev2", 00:19:44.020 "uuid": "6620851c-bc75-43ed-aa04-b15277e00981", 00:19:44.020 "is_configured": true, 00:19:44.020 "data_offset": 2048, 00:19:44.020 "data_size": 63488 00:19:44.020 }, 00:19:44.020 { 00:19:44.020 "name": "BaseBdev3", 00:19:44.020 "uuid": "18c75446-59c5-4071-9dd7-28445f03954c", 00:19:44.020 "is_configured": true, 00:19:44.020 "data_offset": 2048, 00:19:44.020 "data_size": 63488 00:19:44.020 }, 00:19:44.020 { 00:19:44.020 "name": "BaseBdev4", 00:19:44.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.020 "is_configured": false, 00:19:44.020 "data_offset": 0, 00:19:44.020 "data_size": 0 00:19:44.020 } 00:19:44.020 ] 00:19:44.020 }' 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.020 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.588 [2024-11-27 08:51:41.144056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:44.588 [2024-11-27 08:51:41.144471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:44.588 [2024-11-27 08:51:41.144492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:44.588 [2024-11-27 
08:51:41.144827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:44.588 BaseBdev4 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.588 [2024-11-27 08:51:41.151866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:44.588 [2024-11-27 08:51:41.151903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:44.588 [2024-11-27 08:51:41.152217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:44.588 08:51:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.588 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.588 [ 00:19:44.588 { 00:19:44.588 "name": "BaseBdev4", 00:19:44.588 "aliases": [ 00:19:44.588 "075f5846-cb4c-4d6d-9a63-14ac75e4681e" 00:19:44.588 ], 00:19:44.588 "product_name": "Malloc disk", 00:19:44.588 "block_size": 512, 00:19:44.588 "num_blocks": 65536, 00:19:44.588 "uuid": "075f5846-cb4c-4d6d-9a63-14ac75e4681e", 00:19:44.588 "assigned_rate_limits": { 00:19:44.588 "rw_ios_per_sec": 0, 00:19:44.588 "rw_mbytes_per_sec": 0, 00:19:44.588 "r_mbytes_per_sec": 0, 00:19:44.588 "w_mbytes_per_sec": 0 00:19:44.588 }, 00:19:44.588 "claimed": true, 00:19:44.588 "claim_type": "exclusive_write", 00:19:44.588 "zoned": false, 00:19:44.588 "supported_io_types": { 00:19:44.588 "read": true, 00:19:44.588 "write": true, 00:19:44.588 "unmap": true, 00:19:44.588 "flush": true, 00:19:44.588 "reset": true, 00:19:44.588 "nvme_admin": false, 00:19:44.588 "nvme_io": false, 00:19:44.588 "nvme_io_md": false, 00:19:44.588 "write_zeroes": true, 00:19:44.588 "zcopy": true, 00:19:44.588 "get_zone_info": false, 00:19:44.588 "zone_management": false, 00:19:44.588 "zone_append": false, 00:19:44.588 "compare": false, 00:19:44.588 "compare_and_write": false, 00:19:44.588 "abort": true, 00:19:44.588 "seek_hole": false, 00:19:44.588 "seek_data": false, 00:19:44.588 "copy": true, 00:19:44.588 "nvme_iov_md": false 00:19:44.588 }, 00:19:44.588 "memory_domains": [ 00:19:44.588 { 00:19:44.588 "dma_device_id": "system", 00:19:44.588 "dma_device_type": 1 00:19:44.589 }, 00:19:44.589 { 00:19:44.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.589 "dma_device_type": 2 00:19:44.589 } 00:19:44.589 ], 00:19:44.589 "driver_specific": {} 00:19:44.589 } 00:19:44.589 ] 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.589 08:51:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.589 "name": "Existed_Raid", 00:19:44.589 "uuid": "2e401a69-dfa4-4dcd-842a-7bdb5e3b2185", 00:19:44.589 "strip_size_kb": 64, 00:19:44.589 "state": "online", 00:19:44.589 "raid_level": "raid5f", 00:19:44.589 "superblock": true, 00:19:44.589 "num_base_bdevs": 4, 00:19:44.589 "num_base_bdevs_discovered": 4, 00:19:44.589 "num_base_bdevs_operational": 4, 00:19:44.589 "base_bdevs_list": [ 00:19:44.589 { 00:19:44.589 "name": "BaseBdev1", 00:19:44.589 "uuid": "3dd3991e-c069-40fb-a8cd-edb72eecc80d", 00:19:44.589 "is_configured": true, 00:19:44.589 "data_offset": 2048, 00:19:44.589 "data_size": 63488 00:19:44.589 }, 00:19:44.589 { 00:19:44.589 "name": "BaseBdev2", 00:19:44.589 "uuid": "6620851c-bc75-43ed-aa04-b15277e00981", 00:19:44.589 "is_configured": true, 00:19:44.589 "data_offset": 2048, 00:19:44.589 "data_size": 63488 00:19:44.589 }, 00:19:44.589 { 00:19:44.589 "name": "BaseBdev3", 00:19:44.589 "uuid": "18c75446-59c5-4071-9dd7-28445f03954c", 00:19:44.589 "is_configured": true, 00:19:44.589 "data_offset": 2048, 00:19:44.589 "data_size": 63488 00:19:44.589 }, 00:19:44.589 { 00:19:44.589 "name": "BaseBdev4", 00:19:44.589 "uuid": "075f5846-cb4c-4d6d-9a63-14ac75e4681e", 00:19:44.589 "is_configured": true, 00:19:44.589 "data_offset": 2048, 00:19:44.589 "data_size": 63488 00:19:44.589 } 00:19:44.589 ] 00:19:44.589 }' 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.589 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.155 [2024-11-27 08:51:41.708565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.155 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:45.155 "name": "Existed_Raid", 00:19:45.155 "aliases": [ 00:19:45.155 "2e401a69-dfa4-4dcd-842a-7bdb5e3b2185" 00:19:45.155 ], 00:19:45.155 "product_name": "Raid Volume", 00:19:45.155 "block_size": 512, 00:19:45.155 "num_blocks": 190464, 00:19:45.155 "uuid": "2e401a69-dfa4-4dcd-842a-7bdb5e3b2185", 00:19:45.155 "assigned_rate_limits": { 00:19:45.155 "rw_ios_per_sec": 0, 00:19:45.155 "rw_mbytes_per_sec": 0, 00:19:45.155 "r_mbytes_per_sec": 0, 00:19:45.155 "w_mbytes_per_sec": 0 00:19:45.155 }, 00:19:45.155 "claimed": false, 00:19:45.155 "zoned": false, 00:19:45.155 "supported_io_types": { 00:19:45.155 "read": true, 00:19:45.155 "write": true, 00:19:45.155 "unmap": false, 00:19:45.155 "flush": false, 
00:19:45.155 "reset": true, 00:19:45.155 "nvme_admin": false, 00:19:45.155 "nvme_io": false, 00:19:45.155 "nvme_io_md": false, 00:19:45.155 "write_zeroes": true, 00:19:45.155 "zcopy": false, 00:19:45.155 "get_zone_info": false, 00:19:45.155 "zone_management": false, 00:19:45.155 "zone_append": false, 00:19:45.155 "compare": false, 00:19:45.155 "compare_and_write": false, 00:19:45.155 "abort": false, 00:19:45.155 "seek_hole": false, 00:19:45.155 "seek_data": false, 00:19:45.155 "copy": false, 00:19:45.155 "nvme_iov_md": false 00:19:45.155 }, 00:19:45.155 "driver_specific": { 00:19:45.155 "raid": { 00:19:45.155 "uuid": "2e401a69-dfa4-4dcd-842a-7bdb5e3b2185", 00:19:45.155 "strip_size_kb": 64, 00:19:45.155 "state": "online", 00:19:45.155 "raid_level": "raid5f", 00:19:45.155 "superblock": true, 00:19:45.155 "num_base_bdevs": 4, 00:19:45.155 "num_base_bdevs_discovered": 4, 00:19:45.155 "num_base_bdevs_operational": 4, 00:19:45.155 "base_bdevs_list": [ 00:19:45.155 { 00:19:45.155 "name": "BaseBdev1", 00:19:45.155 "uuid": "3dd3991e-c069-40fb-a8cd-edb72eecc80d", 00:19:45.155 "is_configured": true, 00:19:45.155 "data_offset": 2048, 00:19:45.155 "data_size": 63488 00:19:45.155 }, 00:19:45.155 { 00:19:45.155 "name": "BaseBdev2", 00:19:45.155 "uuid": "6620851c-bc75-43ed-aa04-b15277e00981", 00:19:45.155 "is_configured": true, 00:19:45.155 "data_offset": 2048, 00:19:45.155 "data_size": 63488 00:19:45.155 }, 00:19:45.155 { 00:19:45.155 "name": "BaseBdev3", 00:19:45.155 "uuid": "18c75446-59c5-4071-9dd7-28445f03954c", 00:19:45.155 "is_configured": true, 00:19:45.156 "data_offset": 2048, 00:19:45.156 "data_size": 63488 00:19:45.156 }, 00:19:45.156 { 00:19:45.156 "name": "BaseBdev4", 00:19:45.156 "uuid": "075f5846-cb4c-4d6d-9a63-14ac75e4681e", 00:19:45.156 "is_configured": true, 00:19:45.156 "data_offset": 2048, 00:19:45.156 "data_size": 63488 00:19:45.156 } 00:19:45.156 ] 00:19:45.156 } 00:19:45.156 } 00:19:45.156 }' 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:45.156 BaseBdev2 00:19:45.156 BaseBdev3 00:19:45.156 BaseBdev4' 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.156 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.414 08:51:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.414 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.414 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.414 [2024-11-27 08:51:42.092416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.673 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.674 "name": "Existed_Raid", 00:19:45.674 "uuid": "2e401a69-dfa4-4dcd-842a-7bdb5e3b2185", 00:19:45.674 "strip_size_kb": 64, 00:19:45.674 "state": "online", 00:19:45.674 "raid_level": "raid5f", 00:19:45.674 "superblock": true, 00:19:45.674 "num_base_bdevs": 4, 00:19:45.674 "num_base_bdevs_discovered": 3, 00:19:45.674 "num_base_bdevs_operational": 3, 00:19:45.674 "base_bdevs_list": [ 00:19:45.674 { 00:19:45.674 "name": null, 00:19:45.674 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:45.674 "is_configured": false, 00:19:45.674 "data_offset": 0, 00:19:45.674 "data_size": 63488 00:19:45.674 }, 00:19:45.674 { 00:19:45.674 "name": "BaseBdev2", 00:19:45.674 "uuid": "6620851c-bc75-43ed-aa04-b15277e00981", 00:19:45.674 "is_configured": true, 00:19:45.674 "data_offset": 2048, 00:19:45.674 "data_size": 63488 00:19:45.674 }, 00:19:45.674 { 00:19:45.674 "name": "BaseBdev3", 00:19:45.674 "uuid": "18c75446-59c5-4071-9dd7-28445f03954c", 00:19:45.674 "is_configured": true, 00:19:45.674 "data_offset": 2048, 00:19:45.674 "data_size": 63488 00:19:45.674 }, 00:19:45.674 { 00:19:45.674 "name": "BaseBdev4", 00:19:45.674 "uuid": "075f5846-cb4c-4d6d-9a63-14ac75e4681e", 00:19:45.674 "is_configured": true, 00:19:45.674 "data_offset": 2048, 00:19:45.674 "data_size": 63488 00:19:45.674 } 00:19:45.674 ] 00:19:45.674 }' 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.674 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.240 [2024-11-27 08:51:42.742321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:46.240 [2024-11-27 08:51:42.742590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:46.240 [2024-11-27 08:51:42.830893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.240 
08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.240 [2024-11-27 08:51:42.890910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.240 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.499 [2024-11-27 08:51:43.045382] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:46.499 [2024-11-27 08:51:43.045455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.499 BaseBdev2 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.499 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.759 [ 00:19:46.759 { 00:19:46.759 "name": "BaseBdev2", 00:19:46.759 "aliases": [ 00:19:46.759 "cfe2486e-d4a3-4c81-9779-46f4729ea58f" 00:19:46.759 ], 00:19:46.759 "product_name": "Malloc disk", 00:19:46.759 "block_size": 512, 00:19:46.759 "num_blocks": 65536, 00:19:46.759 "uuid": 
"cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:46.759 "assigned_rate_limits": { 00:19:46.759 "rw_ios_per_sec": 0, 00:19:46.759 "rw_mbytes_per_sec": 0, 00:19:46.759 "r_mbytes_per_sec": 0, 00:19:46.759 "w_mbytes_per_sec": 0 00:19:46.759 }, 00:19:46.759 "claimed": false, 00:19:46.759 "zoned": false, 00:19:46.759 "supported_io_types": { 00:19:46.759 "read": true, 00:19:46.759 "write": true, 00:19:46.759 "unmap": true, 00:19:46.759 "flush": true, 00:19:46.759 "reset": true, 00:19:46.759 "nvme_admin": false, 00:19:46.759 "nvme_io": false, 00:19:46.759 "nvme_io_md": false, 00:19:46.759 "write_zeroes": true, 00:19:46.759 "zcopy": true, 00:19:46.759 "get_zone_info": false, 00:19:46.759 "zone_management": false, 00:19:46.759 "zone_append": false, 00:19:46.759 "compare": false, 00:19:46.759 "compare_and_write": false, 00:19:46.759 "abort": true, 00:19:46.759 "seek_hole": false, 00:19:46.759 "seek_data": false, 00:19:46.759 "copy": true, 00:19:46.759 "nvme_iov_md": false 00:19:46.759 }, 00:19:46.759 "memory_domains": [ 00:19:46.759 { 00:19:46.759 "dma_device_id": "system", 00:19:46.759 "dma_device_type": 1 00:19:46.759 }, 00:19:46.759 { 00:19:46.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.759 "dma_device_type": 2 00:19:46.759 } 00:19:46.759 ], 00:19:46.759 "driver_specific": {} 00:19:46.759 } 00:19:46.759 ] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.759 BaseBdev3 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev3 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.759 [ 00:19:46.759 { 00:19:46.759 "name": "BaseBdev3", 00:19:46.759 "aliases": [ 00:19:46.759 "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c" 00:19:46.759 ], 00:19:46.759 
"product_name": "Malloc disk", 00:19:46.759 "block_size": 512, 00:19:46.759 "num_blocks": 65536, 00:19:46.759 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:46.759 "assigned_rate_limits": { 00:19:46.759 "rw_ios_per_sec": 0, 00:19:46.759 "rw_mbytes_per_sec": 0, 00:19:46.759 "r_mbytes_per_sec": 0, 00:19:46.759 "w_mbytes_per_sec": 0 00:19:46.759 }, 00:19:46.759 "claimed": false, 00:19:46.759 "zoned": false, 00:19:46.759 "supported_io_types": { 00:19:46.759 "read": true, 00:19:46.759 "write": true, 00:19:46.759 "unmap": true, 00:19:46.759 "flush": true, 00:19:46.759 "reset": true, 00:19:46.759 "nvme_admin": false, 00:19:46.759 "nvme_io": false, 00:19:46.759 "nvme_io_md": false, 00:19:46.759 "write_zeroes": true, 00:19:46.759 "zcopy": true, 00:19:46.759 "get_zone_info": false, 00:19:46.759 "zone_management": false, 00:19:46.759 "zone_append": false, 00:19:46.759 "compare": false, 00:19:46.759 "compare_and_write": false, 00:19:46.759 "abort": true, 00:19:46.759 "seek_hole": false, 00:19:46.759 "seek_data": false, 00:19:46.759 "copy": true, 00:19:46.759 "nvme_iov_md": false 00:19:46.759 }, 00:19:46.759 "memory_domains": [ 00:19:46.759 { 00:19:46.759 "dma_device_id": "system", 00:19:46.759 "dma_device_type": 1 00:19:46.759 }, 00:19:46.759 { 00:19:46.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.759 "dma_device_type": 2 00:19:46.759 } 00:19:46.759 ], 00:19:46.759 "driver_specific": {} 00:19:46.759 } 00:19:46.759 ] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.759 BaseBdev4 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev4 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.759 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.759 [ 00:19:46.759 { 00:19:46.759 "name": "BaseBdev4", 00:19:46.759 
"aliases": [ 00:19:46.759 "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e" 00:19:46.759 ], 00:19:46.759 "product_name": "Malloc disk", 00:19:46.759 "block_size": 512, 00:19:46.759 "num_blocks": 65536, 00:19:46.759 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:46.759 "assigned_rate_limits": { 00:19:46.759 "rw_ios_per_sec": 0, 00:19:46.759 "rw_mbytes_per_sec": 0, 00:19:46.759 "r_mbytes_per_sec": 0, 00:19:46.759 "w_mbytes_per_sec": 0 00:19:46.759 }, 00:19:46.759 "claimed": false, 00:19:46.759 "zoned": false, 00:19:46.759 "supported_io_types": { 00:19:46.759 "read": true, 00:19:46.759 "write": true, 00:19:46.759 "unmap": true, 00:19:46.759 "flush": true, 00:19:46.759 "reset": true, 00:19:46.759 "nvme_admin": false, 00:19:46.759 "nvme_io": false, 00:19:46.759 "nvme_io_md": false, 00:19:46.759 "write_zeroes": true, 00:19:46.759 "zcopy": true, 00:19:46.759 "get_zone_info": false, 00:19:46.759 "zone_management": false, 00:19:46.759 "zone_append": false, 00:19:46.759 "compare": false, 00:19:46.759 "compare_and_write": false, 00:19:46.759 "abort": true, 00:19:46.759 "seek_hole": false, 00:19:46.759 "seek_data": false, 00:19:46.759 "copy": true, 00:19:46.759 "nvme_iov_md": false 00:19:46.759 }, 00:19:46.759 "memory_domains": [ 00:19:46.759 { 00:19:46.759 "dma_device_id": "system", 00:19:46.759 "dma_device_type": 1 00:19:46.760 }, 00:19:46.760 { 00:19:46.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.760 "dma_device_type": 2 00:19:46.760 } 00:19:46.760 ], 00:19:46.760 "driver_specific": {} 00:19:46.760 } 00:19:46.760 ] 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:46.760 
08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.760 [2024-11-27 08:51:43.437763] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:46.760 [2024-11-27 08:51:43.437959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:46.760 [2024-11-27 08:51:43.438131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.760 [2024-11-27 08:51:43.440839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.760 [2024-11-27 08:51:43.441050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.760 "name": "Existed_Raid", 00:19:46.760 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:46.760 "strip_size_kb": 64, 00:19:46.760 "state": "configuring", 00:19:46.760 "raid_level": "raid5f", 00:19:46.760 "superblock": true, 00:19:46.760 "num_base_bdevs": 4, 00:19:46.760 "num_base_bdevs_discovered": 3, 00:19:46.760 "num_base_bdevs_operational": 4, 00:19:46.760 "base_bdevs_list": [ 00:19:46.760 { 00:19:46.760 "name": "BaseBdev1", 00:19:46.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.760 "is_configured": false, 00:19:46.760 "data_offset": 0, 00:19:46.760 "data_size": 0 00:19:46.760 }, 00:19:46.760 { 00:19:46.760 "name": "BaseBdev2", 00:19:46.760 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:46.760 "is_configured": true, 00:19:46.760 "data_offset": 2048, 00:19:46.760 "data_size": 63488 00:19:46.760 }, 00:19:46.760 { 00:19:46.760 "name": "BaseBdev3", 
00:19:46.760 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:46.760 "is_configured": true, 00:19:46.760 "data_offset": 2048, 00:19:46.760 "data_size": 63488 00:19:46.760 }, 00:19:46.760 { 00:19:46.760 "name": "BaseBdev4", 00:19:46.760 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:46.760 "is_configured": true, 00:19:46.760 "data_offset": 2048, 00:19:46.760 "data_size": 63488 00:19:46.760 } 00:19:46.760 ] 00:19:46.760 }' 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.760 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.335 [2024-11-27 08:51:43.961933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.335 
08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.335 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.335 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.335 "name": "Existed_Raid", 00:19:47.335 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:47.335 "strip_size_kb": 64, 00:19:47.335 "state": "configuring", 00:19:47.335 "raid_level": "raid5f", 00:19:47.335 "superblock": true, 00:19:47.335 "num_base_bdevs": 4, 00:19:47.335 "num_base_bdevs_discovered": 2, 00:19:47.335 "num_base_bdevs_operational": 4, 00:19:47.335 "base_bdevs_list": [ 00:19:47.335 { 00:19:47.335 "name": "BaseBdev1", 00:19:47.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.335 "is_configured": false, 00:19:47.335 "data_offset": 0, 00:19:47.335 "data_size": 0 00:19:47.335 }, 00:19:47.335 { 00:19:47.335 "name": null, 00:19:47.335 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:47.335 "is_configured": false, 00:19:47.335 "data_offset": 0, 00:19:47.335 "data_size": 63488 00:19:47.335 }, 00:19:47.335 { 
00:19:47.335 "name": "BaseBdev3", 00:19:47.336 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:47.336 "is_configured": true, 00:19:47.336 "data_offset": 2048, 00:19:47.336 "data_size": 63488 00:19:47.336 }, 00:19:47.336 { 00:19:47.336 "name": "BaseBdev4", 00:19:47.336 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:47.336 "is_configured": true, 00:19:47.336 "data_offset": 2048, 00:19:47.336 "data_size": 63488 00:19:47.336 } 00:19:47.336 ] 00:19:47.336 }' 00:19:47.336 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.336 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.910 [2024-11-27 08:51:44.567712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.910 BaseBdev1 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.910 [ 00:19:47.910 { 00:19:47.910 "name": "BaseBdev1", 00:19:47.910 "aliases": [ 00:19:47.910 "46655f1c-61d6-4d49-8e60-80975150fd34" 00:19:47.910 ], 00:19:47.910 "product_name": "Malloc disk", 00:19:47.910 "block_size": 512, 00:19:47.910 "num_blocks": 65536, 00:19:47.910 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:47.910 "assigned_rate_limits": { 00:19:47.910 "rw_ios_per_sec": 0, 00:19:47.910 "rw_mbytes_per_sec": 0, 00:19:47.910 
"r_mbytes_per_sec": 0, 00:19:47.910 "w_mbytes_per_sec": 0 00:19:47.910 }, 00:19:47.910 "claimed": true, 00:19:47.910 "claim_type": "exclusive_write", 00:19:47.910 "zoned": false, 00:19:47.910 "supported_io_types": { 00:19:47.910 "read": true, 00:19:47.910 "write": true, 00:19:47.910 "unmap": true, 00:19:47.910 "flush": true, 00:19:47.910 "reset": true, 00:19:47.910 "nvme_admin": false, 00:19:47.910 "nvme_io": false, 00:19:47.910 "nvme_io_md": false, 00:19:47.910 "write_zeroes": true, 00:19:47.910 "zcopy": true, 00:19:47.910 "get_zone_info": false, 00:19:47.910 "zone_management": false, 00:19:47.910 "zone_append": false, 00:19:47.910 "compare": false, 00:19:47.910 "compare_and_write": false, 00:19:47.910 "abort": true, 00:19:47.910 "seek_hole": false, 00:19:47.910 "seek_data": false, 00:19:47.910 "copy": true, 00:19:47.910 "nvme_iov_md": false 00:19:47.910 }, 00:19:47.910 "memory_domains": [ 00:19:47.910 { 00:19:47.910 "dma_device_id": "system", 00:19:47.910 "dma_device_type": 1 00:19:47.910 }, 00:19:47.910 { 00:19:47.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.910 "dma_device_type": 2 00:19:47.910 } 00:19:47.910 ], 00:19:47.910 "driver_specific": {} 00:19:47.910 } 00:19:47.910 ] 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.910 08:51:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.910 "name": "Existed_Raid", 00:19:47.910 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:47.910 "strip_size_kb": 64, 00:19:47.910 "state": "configuring", 00:19:47.910 "raid_level": "raid5f", 00:19:47.910 "superblock": true, 00:19:47.910 "num_base_bdevs": 4, 00:19:47.910 "num_base_bdevs_discovered": 3, 00:19:47.910 "num_base_bdevs_operational": 4, 00:19:47.910 "base_bdevs_list": [ 00:19:47.910 { 00:19:47.910 "name": "BaseBdev1", 00:19:47.910 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:47.910 "is_configured": true, 00:19:47.910 "data_offset": 2048, 00:19:47.910 "data_size": 63488 00:19:47.910 
}, 00:19:47.910 { 00:19:47.910 "name": null, 00:19:47.910 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:47.910 "is_configured": false, 00:19:47.910 "data_offset": 0, 00:19:47.910 "data_size": 63488 00:19:47.910 }, 00:19:47.910 { 00:19:47.910 "name": "BaseBdev3", 00:19:47.910 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:47.910 "is_configured": true, 00:19:47.910 "data_offset": 2048, 00:19:47.910 "data_size": 63488 00:19:47.910 }, 00:19:47.910 { 00:19:47.910 "name": "BaseBdev4", 00:19:47.910 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:47.910 "is_configured": true, 00:19:47.910 "data_offset": 2048, 00:19:47.910 "data_size": 63488 00:19:47.910 } 00:19:47.910 ] 00:19:47.910 }' 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.910 08:51:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 
[2024-11-27 08:51:45.175952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:48.735 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.735 "name": "Existed_Raid", 00:19:48.735 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:48.735 "strip_size_kb": 64, 00:19:48.735 "state": "configuring", 00:19:48.735 "raid_level": "raid5f", 00:19:48.735 "superblock": true, 00:19:48.735 "num_base_bdevs": 4, 00:19:48.735 "num_base_bdevs_discovered": 2, 00:19:48.735 "num_base_bdevs_operational": 4, 00:19:48.735 "base_bdevs_list": [ 00:19:48.735 { 00:19:48.735 "name": "BaseBdev1", 00:19:48.735 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:48.735 "is_configured": true, 00:19:48.735 "data_offset": 2048, 00:19:48.735 "data_size": 63488 00:19:48.735 }, 00:19:48.735 { 00:19:48.735 "name": null, 00:19:48.735 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:48.735 "is_configured": false, 00:19:48.735 "data_offset": 0, 00:19:48.735 "data_size": 63488 00:19:48.735 }, 00:19:48.735 { 00:19:48.735 "name": null, 00:19:48.735 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:48.735 "is_configured": false, 00:19:48.735 "data_offset": 0, 00:19:48.735 "data_size": 63488 00:19:48.735 }, 00:19:48.735 { 00:19:48.735 "name": "BaseBdev4", 00:19:48.735 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:48.735 "is_configured": true, 00:19:48.735 "data_offset": 2048, 00:19:48.735 "data_size": 63488 00:19:48.735 } 00:19:48.735 ] 00:19:48.735 }' 00:19:48.735 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.735 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.994 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.994 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:48.994 08:51:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.994 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.994 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.252 [2024-11-27 08:51:45.772102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.252 08:51:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.252 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.252 "name": "Existed_Raid", 00:19:49.252 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:49.252 "strip_size_kb": 64, 00:19:49.252 "state": "configuring", 00:19:49.252 "raid_level": "raid5f", 00:19:49.252 "superblock": true, 00:19:49.252 "num_base_bdevs": 4, 00:19:49.252 "num_base_bdevs_discovered": 3, 00:19:49.252 "num_base_bdevs_operational": 4, 00:19:49.252 "base_bdevs_list": [ 00:19:49.252 { 00:19:49.252 "name": "BaseBdev1", 00:19:49.252 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:49.252 "is_configured": true, 00:19:49.252 "data_offset": 2048, 00:19:49.252 "data_size": 63488 00:19:49.252 }, 00:19:49.252 { 00:19:49.252 "name": null, 00:19:49.252 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:49.252 "is_configured": false, 00:19:49.252 "data_offset": 0, 00:19:49.252 "data_size": 63488 00:19:49.252 }, 00:19:49.252 { 00:19:49.252 "name": "BaseBdev3", 00:19:49.252 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:49.252 "is_configured": true, 00:19:49.252 "data_offset": 2048, 00:19:49.252 "data_size": 63488 00:19:49.252 }, 00:19:49.252 { 
00:19:49.252 "name": "BaseBdev4", 00:19:49.252 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:49.252 "is_configured": true, 00:19:49.252 "data_offset": 2048, 00:19:49.253 "data_size": 63488 00:19:49.253 } 00:19:49.253 ] 00:19:49.253 }' 00:19:49.253 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.253 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.819 [2024-11-27 08:51:46.316287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.819 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.820 "name": "Existed_Raid", 00:19:49.820 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:49.820 "strip_size_kb": 64, 00:19:49.820 "state": "configuring", 00:19:49.820 "raid_level": "raid5f", 00:19:49.820 "superblock": true, 00:19:49.820 "num_base_bdevs": 4, 00:19:49.820 "num_base_bdevs_discovered": 2, 00:19:49.820 
"num_base_bdevs_operational": 4, 00:19:49.820 "base_bdevs_list": [ 00:19:49.820 { 00:19:49.820 "name": null, 00:19:49.820 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:49.820 "is_configured": false, 00:19:49.820 "data_offset": 0, 00:19:49.820 "data_size": 63488 00:19:49.820 }, 00:19:49.820 { 00:19:49.820 "name": null, 00:19:49.820 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:49.820 "is_configured": false, 00:19:49.820 "data_offset": 0, 00:19:49.820 "data_size": 63488 00:19:49.820 }, 00:19:49.820 { 00:19:49.820 "name": "BaseBdev3", 00:19:49.820 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:49.820 "is_configured": true, 00:19:49.820 "data_offset": 2048, 00:19:49.820 "data_size": 63488 00:19:49.820 }, 00:19:49.820 { 00:19:49.820 "name": "BaseBdev4", 00:19:49.820 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:49.820 "is_configured": true, 00:19:49.820 "data_offset": 2048, 00:19:49.820 "data_size": 63488 00:19:49.820 } 00:19:49.820 ] 00:19:49.820 }' 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.820 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.387 [2024-11-27 08:51:46.981032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.387 08:51:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.387 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.387 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.387 "name": "Existed_Raid", 00:19:50.387 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:50.387 "strip_size_kb": 64, 00:19:50.387 "state": "configuring", 00:19:50.387 "raid_level": "raid5f", 00:19:50.387 "superblock": true, 00:19:50.387 "num_base_bdevs": 4, 00:19:50.387 "num_base_bdevs_discovered": 3, 00:19:50.387 "num_base_bdevs_operational": 4, 00:19:50.387 "base_bdevs_list": [ 00:19:50.387 { 00:19:50.387 "name": null, 00:19:50.387 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:50.387 "is_configured": false, 00:19:50.387 "data_offset": 0, 00:19:50.387 "data_size": 63488 00:19:50.387 }, 00:19:50.387 { 00:19:50.387 "name": "BaseBdev2", 00:19:50.387 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:50.387 "is_configured": true, 00:19:50.387 "data_offset": 2048, 00:19:50.387 "data_size": 63488 00:19:50.387 }, 00:19:50.387 { 00:19:50.387 "name": "BaseBdev3", 00:19:50.387 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:50.387 "is_configured": true, 00:19:50.387 "data_offset": 2048, 00:19:50.387 "data_size": 63488 00:19:50.387 }, 00:19:50.387 { 00:19:50.387 "name": "BaseBdev4", 00:19:50.388 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:50.388 "is_configured": true, 00:19:50.388 "data_offset": 2048, 00:19:50.388 "data_size": 63488 00:19:50.388 } 00:19:50.388 ] 00:19:50.388 }' 00:19:50.388 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.388 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 46655f1c-61d6-4d49-8e60-80975150fd34 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 [2024-11-27 08:51:47.646671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:50.954 [2024-11-27 08:51:47.647007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:50.954 [2024-11-27 
08:51:47.647026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:50.954 NewBaseBdev 00:19:50.954 [2024-11-27 08:51:47.647371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_name=NewBaseBdev 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local i 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 [2024-11-27 08:51:47.653905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:50.954 [2024-11-27 08:51:47.654071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:50.954 [2024-11-27 08:51:47.654425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.954 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.954 [ 00:19:50.954 { 00:19:50.954 "name": "NewBaseBdev", 00:19:50.954 "aliases": [ 00:19:50.955 "46655f1c-61d6-4d49-8e60-80975150fd34" 00:19:50.955 ], 00:19:50.955 "product_name": "Malloc disk", 00:19:50.955 "block_size": 512, 00:19:50.955 "num_blocks": 65536, 00:19:50.955 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:50.955 "assigned_rate_limits": { 00:19:50.955 "rw_ios_per_sec": 0, 00:19:50.955 "rw_mbytes_per_sec": 0, 00:19:50.955 "r_mbytes_per_sec": 0, 00:19:50.955 "w_mbytes_per_sec": 0 00:19:50.955 }, 00:19:50.955 "claimed": true, 00:19:50.955 "claim_type": "exclusive_write", 00:19:50.955 "zoned": false, 00:19:50.955 "supported_io_types": { 00:19:50.955 "read": true, 00:19:50.955 "write": true, 00:19:50.955 "unmap": true, 00:19:50.955 "flush": true, 00:19:50.955 "reset": true, 00:19:50.955 "nvme_admin": false, 00:19:50.955 "nvme_io": false, 00:19:50.955 "nvme_io_md": false, 00:19:50.955 "write_zeroes": true, 00:19:50.955 "zcopy": true, 00:19:50.955 "get_zone_info": false, 00:19:50.955 "zone_management": false, 00:19:50.955 "zone_append": false, 00:19:50.955 "compare": false, 00:19:50.955 "compare_and_write": false, 00:19:50.955 "abort": true, 00:19:50.955 "seek_hole": false, 00:19:50.955 "seek_data": false, 00:19:50.955 "copy": true, 00:19:50.955 "nvme_iov_md": false 00:19:50.955 }, 00:19:50.955 "memory_domains": [ 00:19:50.955 { 00:19:50.955 "dma_device_id": "system", 00:19:50.955 "dma_device_type": 1 00:19:50.955 }, 00:19:50.955 { 00:19:50.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.955 "dma_device_type": 2 00:19:50.955 } 00:19:50.955 ], 00:19:50.955 "driver_specific": {} 00:19:50.955 } 00:19:50.955 ] 00:19:50.955 08:51:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # return 0 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.955 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:51.214 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.214 "name": "Existed_Raid", 00:19:51.214 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:51.214 "strip_size_kb": 64, 00:19:51.214 "state": "online", 00:19:51.214 "raid_level": "raid5f", 00:19:51.215 "superblock": true, 00:19:51.215 "num_base_bdevs": 4, 00:19:51.215 "num_base_bdevs_discovered": 4, 00:19:51.215 "num_base_bdevs_operational": 4, 00:19:51.215 "base_bdevs_list": [ 00:19:51.215 { 00:19:51.215 "name": "NewBaseBdev", 00:19:51.215 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:51.215 "is_configured": true, 00:19:51.215 "data_offset": 2048, 00:19:51.215 "data_size": 63488 00:19:51.215 }, 00:19:51.215 { 00:19:51.215 "name": "BaseBdev2", 00:19:51.215 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:51.215 "is_configured": true, 00:19:51.215 "data_offset": 2048, 00:19:51.215 "data_size": 63488 00:19:51.215 }, 00:19:51.215 { 00:19:51.215 "name": "BaseBdev3", 00:19:51.215 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:51.215 "is_configured": true, 00:19:51.215 "data_offset": 2048, 00:19:51.215 "data_size": 63488 00:19:51.215 }, 00:19:51.215 { 00:19:51.215 "name": "BaseBdev4", 00:19:51.215 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:51.215 "is_configured": true, 00:19:51.215 "data_offset": 2048, 00:19:51.215 "data_size": 63488 00:19:51.215 } 00:19:51.215 ] 00:19:51.215 }' 00:19:51.215 08:51:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.215 08:51:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.473 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:51.474 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.474 [2024-11-27 08:51:48.210816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.474 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.732 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:51.732 "name": "Existed_Raid", 00:19:51.732 "aliases": [ 00:19:51.732 "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd" 00:19:51.732 ], 00:19:51.732 "product_name": "Raid Volume", 00:19:51.732 "block_size": 512, 00:19:51.732 "num_blocks": 190464, 00:19:51.732 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:51.732 "assigned_rate_limits": { 00:19:51.732 "rw_ios_per_sec": 0, 00:19:51.732 "rw_mbytes_per_sec": 0, 00:19:51.732 "r_mbytes_per_sec": 0, 00:19:51.732 "w_mbytes_per_sec": 0 00:19:51.732 }, 00:19:51.732 "claimed": false, 00:19:51.732 "zoned": false, 00:19:51.732 "supported_io_types": { 00:19:51.732 "read": true, 00:19:51.732 "write": true, 00:19:51.732 "unmap": false, 00:19:51.732 "flush": false, 00:19:51.732 "reset": true, 00:19:51.732 "nvme_admin": false, 00:19:51.732 "nvme_io": false, 
00:19:51.732 "nvme_io_md": false, 00:19:51.732 "write_zeroes": true, 00:19:51.732 "zcopy": false, 00:19:51.732 "get_zone_info": false, 00:19:51.732 "zone_management": false, 00:19:51.732 "zone_append": false, 00:19:51.732 "compare": false, 00:19:51.732 "compare_and_write": false, 00:19:51.732 "abort": false, 00:19:51.732 "seek_hole": false, 00:19:51.732 "seek_data": false, 00:19:51.732 "copy": false, 00:19:51.732 "nvme_iov_md": false 00:19:51.733 }, 00:19:51.733 "driver_specific": { 00:19:51.733 "raid": { 00:19:51.733 "uuid": "e4a5ace3-a4c1-4798-a91a-cb66e849f6dd", 00:19:51.733 "strip_size_kb": 64, 00:19:51.733 "state": "online", 00:19:51.733 "raid_level": "raid5f", 00:19:51.733 "superblock": true, 00:19:51.733 "num_base_bdevs": 4, 00:19:51.733 "num_base_bdevs_discovered": 4, 00:19:51.733 "num_base_bdevs_operational": 4, 00:19:51.733 "base_bdevs_list": [ 00:19:51.733 { 00:19:51.733 "name": "NewBaseBdev", 00:19:51.733 "uuid": "46655f1c-61d6-4d49-8e60-80975150fd34", 00:19:51.733 "is_configured": true, 00:19:51.733 "data_offset": 2048, 00:19:51.733 "data_size": 63488 00:19:51.733 }, 00:19:51.733 { 00:19:51.733 "name": "BaseBdev2", 00:19:51.733 "uuid": "cfe2486e-d4a3-4c81-9779-46f4729ea58f", 00:19:51.733 "is_configured": true, 00:19:51.733 "data_offset": 2048, 00:19:51.733 "data_size": 63488 00:19:51.733 }, 00:19:51.733 { 00:19:51.733 "name": "BaseBdev3", 00:19:51.733 "uuid": "4f65c923-b3b9-45eb-a49e-b6cbbe93d41c", 00:19:51.733 "is_configured": true, 00:19:51.733 "data_offset": 2048, 00:19:51.733 "data_size": 63488 00:19:51.733 }, 00:19:51.733 { 00:19:51.733 "name": "BaseBdev4", 00:19:51.733 "uuid": "0a9a9773-5a15-4e11-89c0-99e62e7d4b3e", 00:19:51.733 "is_configured": true, 00:19:51.733 "data_offset": 2048, 00:19:51.733 "data_size": 63488 00:19:51.733 } 00:19:51.733 ] 00:19:51.733 } 00:19:51.733 } 00:19:51.733 }' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:51.733 BaseBdev2 00:19:51.733 BaseBdev3 00:19:51.733 BaseBdev4' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.733 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.991 08:51:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.991 [2024-11-27 08:51:48.570566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.991 [2024-11-27 08:51:48.570608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:51.991 [2024-11-27 08:51:48.570702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.991 [2024-11-27 08:51:48.571114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.991 [2024-11-27 08:51:48.571132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:51.991 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83971 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' -z 83971 ']' 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # kill -0 83971 00:19:51.992 08:51:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # uname 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 83971 00:19:51.992 killing process with pid 83971 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 83971' 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # kill 83971 00:19:51.992 [2024-11-27 08:51:48.611058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:51.992 08:51:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@975 -- # wait 83971 00:19:52.314 [2024-11-27 08:51:48.977770] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.696 08:51:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:53.696 00:19:53.696 real 0m12.836s 00:19:53.696 user 0m21.088s 00:19:53.696 sys 0m1.942s 00:19:53.696 08:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:19:53.696 ************************************ 00:19:53.696 END TEST raid5f_state_function_test_sb 00:19:53.696 ************************************ 00:19:53.696 08:51:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.696 08:51:50 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:53.696 08:51:50 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:19:53.696 
08:51:50 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:19:53.696 08:51:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.696 ************************************ 00:19:53.696 START TEST raid5f_superblock_test 00:19:53.696 ************************************ 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # raid_superblock_test raid5f 4 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84653 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84653 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # '[' -z 84653 ']' 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:53.696 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.696 [2024-11-27 08:51:50.250240] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:19:53.696 [2024-11-27 08:51:50.250449] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84653 ] 00:19:53.696 [2024-11-27 08:51:50.428415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.956 [2024-11-27 08:51:50.575574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.214 [2024-11-27 08:51:50.800219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.214 [2024-11-27 08:51:50.800266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@865 -- # return 0 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 malloc1 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-11-27 08:51:51.293858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:54.782 [2024-11-27 08:51:51.294077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.782 [2024-11-27 08:51:51.294157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:54.782 [2024-11-27 08:51:51.294282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.782 [2024-11-27 08:51:51.297260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.782 [2024-11-27 08:51:51.297433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:54.782 pt1 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 malloc2 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-11-27 08:51:51.354170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:54.782 [2024-11-27 08:51:51.354261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.782 [2024-11-27 08:51:51.354296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:54.782 [2024-11-27 08:51:51.354312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.782 [2024-11-27 08:51:51.357263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.782 [2024-11-27 08:51:51.357452] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:54.782 pt2 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 malloc3 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-11-27 08:51:51.425130] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:54.782 [2024-11-27 08:51:51.425217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.782 [2024-11-27 08:51:51.425252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:54.782 [2024-11-27 08:51:51.425268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.782 [2024-11-27 08:51:51.428277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.782 [2024-11-27 08:51:51.428329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:54.782 pt3 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 08:51:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 malloc4 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.782 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-11-27 08:51:51.485244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:54.782 [2024-11-27 08:51:51.485332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.782 [2024-11-27 08:51:51.485395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:54.783 [2024-11-27 08:51:51.485415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.783 [2024-11-27 08:51:51.488417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.783 [2024-11-27 08:51:51.488462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:54.783 pt4 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.783 [2024-11-27 08:51:51.493374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:54.783 [2024-11-27 08:51:51.496169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:54.783 [2024-11-27 08:51:51.496439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:54.783 [2024-11-27 08:51:51.496721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:54.783 [2024-11-27 08:51:51.497105] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:54.783 [2024-11-27 08:51:51.497240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:54.783 [2024-11-27 08:51:51.497622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:54.783 [2024-11-27 08:51:51.504661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:54.783 [2024-11-27 08:51:51.504817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:54.783 [2024-11-27 08:51:51.505179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.783 
08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.783 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.041 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.041 "name": "raid_bdev1", 00:19:55.041 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:55.041 "strip_size_kb": 64, 00:19:55.041 "state": "online", 00:19:55.041 "raid_level": "raid5f", 00:19:55.041 "superblock": true, 00:19:55.041 "num_base_bdevs": 4, 00:19:55.041 "num_base_bdevs_discovered": 4, 00:19:55.041 "num_base_bdevs_operational": 4, 00:19:55.041 "base_bdevs_list": [ 00:19:55.041 { 00:19:55.041 "name": "pt1", 00:19:55.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.041 "is_configured": true, 00:19:55.041 "data_offset": 2048, 00:19:55.041 "data_size": 63488 00:19:55.041 }, 00:19:55.041 { 00:19:55.041 "name": "pt2", 00:19:55.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.041 "is_configured": true, 00:19:55.041 "data_offset": 2048, 00:19:55.041 
"data_size": 63488 00:19:55.041 }, 00:19:55.041 { 00:19:55.041 "name": "pt3", 00:19:55.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:55.041 "is_configured": true, 00:19:55.041 "data_offset": 2048, 00:19:55.041 "data_size": 63488 00:19:55.041 }, 00:19:55.041 { 00:19:55.041 "name": "pt4", 00:19:55.041 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:55.041 "is_configured": true, 00:19:55.041 "data_offset": 2048, 00:19:55.041 "data_size": 63488 00:19:55.041 } 00:19:55.041 ] 00:19:55.041 }' 00:19:55.041 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.041 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.300 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.300 [2024-11-27 08:51:51.993621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.300 08:51:52 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.300 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:55.300 "name": "raid_bdev1", 00:19:55.300 "aliases": [ 00:19:55.300 "c60fde31-731d-447c-8662-88793b03812a" 00:19:55.300 ], 00:19:55.300 "product_name": "Raid Volume", 00:19:55.300 "block_size": 512, 00:19:55.300 "num_blocks": 190464, 00:19:55.300 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:55.300 "assigned_rate_limits": { 00:19:55.300 "rw_ios_per_sec": 0, 00:19:55.300 "rw_mbytes_per_sec": 0, 00:19:55.300 "r_mbytes_per_sec": 0, 00:19:55.300 "w_mbytes_per_sec": 0 00:19:55.300 }, 00:19:55.300 "claimed": false, 00:19:55.300 "zoned": false, 00:19:55.300 "supported_io_types": { 00:19:55.300 "read": true, 00:19:55.300 "write": true, 00:19:55.300 "unmap": false, 00:19:55.300 "flush": false, 00:19:55.300 "reset": true, 00:19:55.300 "nvme_admin": false, 00:19:55.300 "nvme_io": false, 00:19:55.300 "nvme_io_md": false, 00:19:55.300 "write_zeroes": true, 00:19:55.300 "zcopy": false, 00:19:55.300 "get_zone_info": false, 00:19:55.300 "zone_management": false, 00:19:55.300 "zone_append": false, 00:19:55.300 "compare": false, 00:19:55.300 "compare_and_write": false, 00:19:55.300 "abort": false, 00:19:55.300 "seek_hole": false, 00:19:55.300 "seek_data": false, 00:19:55.300 "copy": false, 00:19:55.300 "nvme_iov_md": false 00:19:55.300 }, 00:19:55.300 "driver_specific": { 00:19:55.300 "raid": { 00:19:55.300 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:55.300 "strip_size_kb": 64, 00:19:55.300 "state": "online", 00:19:55.300 "raid_level": "raid5f", 00:19:55.300 "superblock": true, 00:19:55.300 "num_base_bdevs": 4, 00:19:55.300 "num_base_bdevs_discovered": 4, 00:19:55.300 "num_base_bdevs_operational": 4, 00:19:55.300 "base_bdevs_list": [ 00:19:55.300 { 00:19:55.300 "name": "pt1", 00:19:55.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:55.300 "is_configured": true, 00:19:55.300 "data_offset": 2048, 
00:19:55.300 "data_size": 63488 00:19:55.300 }, 00:19:55.300 { 00:19:55.300 "name": "pt2", 00:19:55.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:55.300 "is_configured": true, 00:19:55.300 "data_offset": 2048, 00:19:55.300 "data_size": 63488 00:19:55.300 }, 00:19:55.300 { 00:19:55.300 "name": "pt3", 00:19:55.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:55.300 "is_configured": true, 00:19:55.300 "data_offset": 2048, 00:19:55.300 "data_size": 63488 00:19:55.300 }, 00:19:55.300 { 00:19:55.300 "name": "pt4", 00:19:55.300 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:55.300 "is_configured": true, 00:19:55.300 "data_offset": 2048, 00:19:55.300 "data_size": 63488 00:19:55.300 } 00:19:55.300 ] 00:19:55.300 } 00:19:55.300 } 00:19:55.300 }' 00:19:55.300 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:55.560 pt2 00:19:55.560 pt3 00:19:55.560 pt4' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 08:51:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 [2024-11-27 08:51:52.353665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c60fde31-731d-447c-8662-88793b03812a 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c60fde31-731d-447c-8662-88793b03812a ']' 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 [2024-11-27 08:51:52.397434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.820 [2024-11-27 08:51:52.397468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.820 [2024-11-27 08:51:52.397583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.820 [2024-11-27 08:51:52.397717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.820 [2024-11-27 08:51:52.397742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:55.820 
08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 08:51:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 [2024-11-27 08:51:52.545491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:55.820 [2024-11-27 08:51:52.548177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:55.820 [2024-11-27 08:51:52.548248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:55.820 [2024-11-27 08:51:52.548307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:55.820 [2024-11-27 08:51:52.548412] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:55.820 [2024-11-27 08:51:52.548489] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:55.820 [2024-11-27 08:51:52.548524] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:55.820 [2024-11-27 08:51:52.548556] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:55.820 [2024-11-27 08:51:52.548579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.820 [2024-11-27 08:51:52.548596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:55.820 request: 00:19:55.820 { 00:19:55.820 "name": "raid_bdev1", 00:19:55.820 "raid_level": "raid5f", 00:19:55.820 "base_bdevs": [ 00:19:55.820 "malloc1", 00:19:55.820 "malloc2", 00:19:55.820 "malloc3", 00:19:55.820 "malloc4" 00:19:55.820 ], 00:19:55.820 "strip_size_kb": 64, 00:19:55.820 "superblock": false, 00:19:55.820 "method": "bdev_raid_create", 00:19:55.820 "req_id": 1 00:19:55.820 } 00:19:55.820 Got JSON-RPC error response 
00:19:55.820 response: 00:19:55.820 { 00:19:55.820 "code": -17, 00:19:55.820 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:55.820 } 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.820 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.080 [2024-11-27 08:51:52.609487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:56.080 [2024-11-27 08:51:52.609698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:56.080 [2024-11-27 08:51:52.609778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:56.080 [2024-11-27 08:51:52.609976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.080 [2024-11-27 08:51:52.613125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.080 [2024-11-27 08:51:52.613290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:56.080 [2024-11-27 08:51:52.613530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:56.080 [2024-11-27 08:51:52.613730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:56.080 pt1 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.080 "name": "raid_bdev1", 00:19:56.080 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:56.080 "strip_size_kb": 64, 00:19:56.080 "state": "configuring", 00:19:56.080 "raid_level": "raid5f", 00:19:56.080 "superblock": true, 00:19:56.080 "num_base_bdevs": 4, 00:19:56.080 "num_base_bdevs_discovered": 1, 00:19:56.080 "num_base_bdevs_operational": 4, 00:19:56.080 "base_bdevs_list": [ 00:19:56.080 { 00:19:56.080 "name": "pt1", 00:19:56.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.080 "is_configured": true, 00:19:56.080 "data_offset": 2048, 00:19:56.080 "data_size": 63488 00:19:56.080 }, 00:19:56.080 { 00:19:56.080 "name": null, 00:19:56.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.080 "is_configured": false, 00:19:56.080 "data_offset": 2048, 00:19:56.080 "data_size": 63488 00:19:56.080 }, 00:19:56.080 { 00:19:56.080 "name": null, 00:19:56.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.080 "is_configured": false, 00:19:56.080 "data_offset": 2048, 00:19:56.080 "data_size": 63488 00:19:56.080 }, 00:19:56.080 { 00:19:56.080 "name": null, 00:19:56.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:56.080 "is_configured": false, 00:19:56.080 "data_offset": 2048, 00:19:56.080 "data_size": 63488 00:19:56.080 } 00:19:56.080 ] 00:19:56.080 }' 
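The `verify_raid_bdev_state` call traced above pulls the `raid_bdev1` entry out of `bdev_raid_get_bdevs all` with jq and compares its fields against the expected state. Below is a simplified, self-contained bash stand-in for that comparison — the field values are copied from the `raid_bdev_info` JSON dumped in the trace, and the function body is a sketch (the real helper in `bdev_raid.sh` additionally counts discovered base bdevs from `base_bdevs_list`):

```shell
# Fields as extracted from the raid_bdev_info JSON dumped above.
state="configuring"; raid_level="raid5f"
strip_size_kb=64;    num_base_bdevs_operational=4

# Simplified stand-in for bdev_raid.sh's verify_raid_bdev_state (sketch).
verify_raid_bdev_state() {
    local expected_state=$1 expected_level=$2
    local expected_strip=$3 expected_operational=$4
    [[ $state == "$expected_state" ]] &&
        [[ $raid_level == "$expected_level" ]] &&
        (( strip_size_kb == expected_strip )) &&
        (( num_base_bdevs_operational == expected_operational ))
}

verify_raid_bdev_state configuring raid5f 64 4 && echo "state OK"
```

The trace repeats this same check after each mutation: still `configuring` with 4 operational bdevs while the passthru bdevs are recreated, then `online` with 4, then `online` with 3 once `pt1` is deleted.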
00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.080 08:51:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.648 [2024-11-27 08:51:53.125810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:56.648 [2024-11-27 08:51:53.125917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.648 [2024-11-27 08:51:53.125952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:56.648 [2024-11-27 08:51:53.125971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.648 [2024-11-27 08:51:53.126634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.648 [2024-11-27 08:51:53.126673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:56.648 [2024-11-27 08:51:53.126798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:56.648 [2024-11-27 08:51:53.126838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:56.648 pt2 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.648 [2024-11-27 08:51:53.133766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:56.648 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.648 "name": "raid_bdev1", 00:19:56.649 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:56.649 "strip_size_kb": 64, 00:19:56.649 "state": "configuring", 00:19:56.649 "raid_level": "raid5f", 00:19:56.649 "superblock": true, 00:19:56.649 "num_base_bdevs": 4, 00:19:56.649 "num_base_bdevs_discovered": 1, 00:19:56.649 "num_base_bdevs_operational": 4, 00:19:56.649 "base_bdevs_list": [ 00:19:56.649 { 00:19:56.649 "name": "pt1", 00:19:56.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:56.649 "is_configured": true, 00:19:56.649 "data_offset": 2048, 00:19:56.649 "data_size": 63488 00:19:56.649 }, 00:19:56.649 { 00:19:56.649 "name": null, 00:19:56.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:56.649 "is_configured": false, 00:19:56.649 "data_offset": 0, 00:19:56.649 "data_size": 63488 00:19:56.649 }, 00:19:56.649 { 00:19:56.649 "name": null, 00:19:56.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:56.649 "is_configured": false, 00:19:56.649 "data_offset": 2048, 00:19:56.649 "data_size": 63488 00:19:56.649 }, 00:19:56.649 { 00:19:56.649 "name": null, 00:19:56.649 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:56.649 "is_configured": false, 00:19:56.649 "data_offset": 2048, 00:19:56.649 "data_size": 63488 00:19:56.649 } 00:19:56.649 ] 00:19:56.649 }' 00:19:56.649 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.649 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.907 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:56.907 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:56.907 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
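The `(( i = 1 ))` / `(( i < num_base_bdevs ))` counters set up above drive a loop in `bdev_raid.sh` that recreates one passthru bdev per remaining base device. A bash sketch of that loop, with `rpc_cmd` stubbed to record its arguments instead of talking to the SPDK JSON-RPC socket (the stub and the UUID formatting are ours; the real `rpc_cmd` dispatches to a live SPDK target):

```shell
# Stub: record each RPC instead of sending it to an SPDK target.
rpc_cmd() { created+=("$*"); }

num_base_bdevs=4
created=()
# Mirrors bdev_raid.sh@478-479: i starts at 1 because pt1 already exists.
for (( i = 1; i < num_base_bdevs; i++ )); do
    n=$(( i + 1 ))
    rpc_cmd bdev_passthru_create -b "malloc${n}" -p "pt${n}" \
        -u "$(printf '00000000-0000-0000-0000-%012d' "$n")"
done
printf '%s\n' "${created[@]}"
```

Starting at `i=1` skips `pt1`, which was already recreated on `malloc1` earlier in the trace; the recorded commands match the `bdev_passthru_create -b malloc2 -p pt2 ...` through `pt4` calls that follow.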
00:19:56.907 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.907 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.239 [2024-11-27 08:51:53.665924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:57.239 [2024-11-27 08:51:53.666016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.239 [2024-11-27 08:51:53.666053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:57.239 [2024-11-27 08:51:53.666069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.239 [2024-11-27 08:51:53.666755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.239 [2024-11-27 08:51:53.666782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:57.239 [2024-11-27 08:51:53.666901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:57.239 [2024-11-27 08:51:53.666936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:57.239 pt2 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.239 [2024-11-27 08:51:53.677866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:57.239 [2024-11-27 08:51:53.677925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.239 [2024-11-27 08:51:53.677953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:57.239 [2024-11-27 08:51:53.677967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.239 [2024-11-27 08:51:53.678454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.239 [2024-11-27 08:51:53.678488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:57.239 [2024-11-27 08:51:53.678568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:57.239 [2024-11-27 08:51:53.678595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:57.239 pt3 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.239 [2024-11-27 08:51:53.689847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:57.239 [2024-11-27 08:51:53.689909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.239 [2024-11-27 08:51:53.689939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:57.239 [2024-11-27 08:51:53.689953] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.239 [2024-11-27 08:51:53.690468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.239 [2024-11-27 08:51:53.690495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:57.239 [2024-11-27 08:51:53.690577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:57.239 [2024-11-27 08:51:53.690605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:57.239 [2024-11-27 08:51:53.690795] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:57.239 [2024-11-27 08:51:53.690819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:57.239 [2024-11-27 08:51:53.691147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:57.239 [2024-11-27 08:51:53.697693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:57.239 [2024-11-27 08:51:53.697724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:57.239 [2024-11-27 08:51:53.697937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.239 pt4 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:57.239 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.240 "name": "raid_bdev1", 00:19:57.240 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:57.240 "strip_size_kb": 64, 00:19:57.240 "state": "online", 00:19:57.240 "raid_level": "raid5f", 00:19:57.240 "superblock": true, 00:19:57.240 "num_base_bdevs": 4, 00:19:57.240 "num_base_bdevs_discovered": 4, 00:19:57.240 "num_base_bdevs_operational": 4, 00:19:57.240 "base_bdevs_list": [ 00:19:57.240 { 00:19:57.240 "name": "pt1", 00:19:57.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.240 "is_configured": true, 00:19:57.240 
"data_offset": 2048, 00:19:57.240 "data_size": 63488 00:19:57.240 }, 00:19:57.240 { 00:19:57.240 "name": "pt2", 00:19:57.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.240 "is_configured": true, 00:19:57.240 "data_offset": 2048, 00:19:57.240 "data_size": 63488 00:19:57.240 }, 00:19:57.240 { 00:19:57.240 "name": "pt3", 00:19:57.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.240 "is_configured": true, 00:19:57.240 "data_offset": 2048, 00:19:57.240 "data_size": 63488 00:19:57.240 }, 00:19:57.240 { 00:19:57.240 "name": "pt4", 00:19:57.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.240 "is_configured": true, 00:19:57.240 "data_offset": 2048, 00:19:57.240 "data_size": 63488 00:19:57.240 } 00:19:57.240 ] 00:19:57.240 }' 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.240 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.499 08:51:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.499 [2024-11-27 08:51:54.190325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:57.499 "name": "raid_bdev1", 00:19:57.499 "aliases": [ 00:19:57.499 "c60fde31-731d-447c-8662-88793b03812a" 00:19:57.499 ], 00:19:57.499 "product_name": "Raid Volume", 00:19:57.499 "block_size": 512, 00:19:57.499 "num_blocks": 190464, 00:19:57.499 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:57.499 "assigned_rate_limits": { 00:19:57.499 "rw_ios_per_sec": 0, 00:19:57.499 "rw_mbytes_per_sec": 0, 00:19:57.499 "r_mbytes_per_sec": 0, 00:19:57.499 "w_mbytes_per_sec": 0 00:19:57.499 }, 00:19:57.499 "claimed": false, 00:19:57.499 "zoned": false, 00:19:57.499 "supported_io_types": { 00:19:57.499 "read": true, 00:19:57.499 "write": true, 00:19:57.499 "unmap": false, 00:19:57.499 "flush": false, 00:19:57.499 "reset": true, 00:19:57.499 "nvme_admin": false, 00:19:57.499 "nvme_io": false, 00:19:57.499 "nvme_io_md": false, 00:19:57.499 "write_zeroes": true, 00:19:57.499 "zcopy": false, 00:19:57.499 "get_zone_info": false, 00:19:57.499 "zone_management": false, 00:19:57.499 "zone_append": false, 00:19:57.499 "compare": false, 00:19:57.499 "compare_and_write": false, 00:19:57.499 "abort": false, 00:19:57.499 "seek_hole": false, 00:19:57.499 "seek_data": false, 00:19:57.499 "copy": false, 00:19:57.499 "nvme_iov_md": false 00:19:57.499 }, 00:19:57.499 "driver_specific": { 00:19:57.499 "raid": { 00:19:57.499 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:57.499 "strip_size_kb": 64, 00:19:57.499 "state": "online", 00:19:57.499 "raid_level": "raid5f", 00:19:57.499 "superblock": true, 00:19:57.499 "num_base_bdevs": 4, 00:19:57.499 "num_base_bdevs_discovered": 4, 
00:19:57.499 "num_base_bdevs_operational": 4, 00:19:57.499 "base_bdevs_list": [ 00:19:57.499 { 00:19:57.499 "name": "pt1", 00:19:57.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:57.499 "is_configured": true, 00:19:57.499 "data_offset": 2048, 00:19:57.499 "data_size": 63488 00:19:57.499 }, 00:19:57.499 { 00:19:57.499 "name": "pt2", 00:19:57.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:57.499 "is_configured": true, 00:19:57.499 "data_offset": 2048, 00:19:57.499 "data_size": 63488 00:19:57.499 }, 00:19:57.499 { 00:19:57.499 "name": "pt3", 00:19:57.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:57.499 "is_configured": true, 00:19:57.499 "data_offset": 2048, 00:19:57.499 "data_size": 63488 00:19:57.499 }, 00:19:57.499 { 00:19:57.499 "name": "pt4", 00:19:57.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:57.499 "is_configured": true, 00:19:57.499 "data_offset": 2048, 00:19:57.499 "data_size": 63488 00:19:57.499 } 00:19:57.499 ] 00:19:57.499 } 00:19:57.499 } 00:19:57.499 }' 00:19:57.499 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:57.758 pt2 00:19:57.758 pt3 00:19:57.758 pt4' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.758 08:51:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.758 
08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.758 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.018 [2024-11-27 08:51:54.554301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
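When the raid bdev came online earlier in the trace, it reported `blockcnt 190464, blocklen 512`, and the `Raid Volume` dump above shows the same `num_blocks` with four base bdevs of `data_size` 63488. That figure follows from raid5f reserving one base bdev's worth of parity per stripe; a quick arithmetic check (a sketch of the standard RAID-5-style usable-size formula, not code quoted from SPDK):

```shell
num_base_bdevs=4
data_size=63488   # blocks per base bdev, past the 2048-block data_offset
block_size=512    # bytes, per "blockcnt 190464, blocklen 512" in the trace

# raid5f keeps one bdev's worth of parity per stripe, so usable
# capacity is (n - 1) * per-bdev data size.
usable_blocks=$(( (num_base_bdevs - 1) * data_size ))
echo "$usable_blocks blocks"                                  # 190464 blocks
echo "$(( usable_blocks * block_size / 1048576 )) MiB"        # 93 MiB
```

The later `verify_raid_bdev_state raid_bdev1 online raid5f 64 3` check only changes the operational count, not this capacity: removing `pt1` degrades the array rather than reshaping it.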
00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c60fde31-731d-447c-8662-88793b03812a '!=' c60fde31-731d-447c-8662-88793b03812a ']' 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.018 [2024-11-27 08:51:54.602153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.018 "name": "raid_bdev1", 00:19:58.018 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:58.018 "strip_size_kb": 64, 00:19:58.018 "state": "online", 00:19:58.018 "raid_level": "raid5f", 00:19:58.018 "superblock": true, 00:19:58.018 "num_base_bdevs": 4, 00:19:58.018 "num_base_bdevs_discovered": 3, 00:19:58.018 "num_base_bdevs_operational": 3, 00:19:58.018 "base_bdevs_list": [ 00:19:58.018 { 00:19:58.018 "name": null, 00:19:58.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.018 "is_configured": false, 00:19:58.018 "data_offset": 0, 00:19:58.018 "data_size": 63488 00:19:58.018 }, 00:19:58.018 { 00:19:58.018 "name": "pt2", 00:19:58.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.018 "is_configured": true, 00:19:58.018 "data_offset": 2048, 00:19:58.018 "data_size": 63488 00:19:58.018 }, 00:19:58.018 { 00:19:58.018 "name": "pt3", 00:19:58.018 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.018 "is_configured": true, 00:19:58.018 "data_offset": 2048, 00:19:58.018 "data_size": 63488 00:19:58.018 }, 00:19:58.018 { 00:19:58.018 "name": "pt4", 00:19:58.018 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.018 "is_configured": true, 00:19:58.018 
"data_offset": 2048, 00:19:58.018 "data_size": 63488 00:19:58.018 } 00:19:58.018 ] 00:19:58.018 }' 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.018 08:51:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.587 [2024-11-27 08:51:55.118326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.587 [2024-11-27 08:51:55.118411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.587 [2024-11-27 08:51:55.118535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.587 [2024-11-27 08:51:55.118652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.587 [2024-11-27 08:51:55.118670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.587 [2024-11-27 08:51:55.210269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:58.587 [2024-11-27 08:51:55.210358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.587 [2024-11-27 08:51:55.210402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:58.587 [2024-11-27 08:51:55.210418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.587 [2024-11-27 08:51:55.213548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.587 [2024-11-27 08:51:55.213723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:58.587 [2024-11-27 08:51:55.213855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:58.587 [2024-11-27 08:51:55.213925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.587 pt2 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.587 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.588 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.588 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.588 "name": "raid_bdev1", 00:19:58.588 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:58.588 "strip_size_kb": 64, 00:19:58.588 "state": "configuring", 00:19:58.588 "raid_level": "raid5f", 00:19:58.588 "superblock": true, 00:19:58.588 
"num_base_bdevs": 4, 00:19:58.588 "num_base_bdevs_discovered": 1, 00:19:58.588 "num_base_bdevs_operational": 3, 00:19:58.588 "base_bdevs_list": [ 00:19:58.588 { 00:19:58.588 "name": null, 00:19:58.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.588 "is_configured": false, 00:19:58.588 "data_offset": 2048, 00:19:58.588 "data_size": 63488 00:19:58.588 }, 00:19:58.588 { 00:19:58.588 "name": "pt2", 00:19:58.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.588 "is_configured": true, 00:19:58.588 "data_offset": 2048, 00:19:58.588 "data_size": 63488 00:19:58.588 }, 00:19:58.588 { 00:19:58.588 "name": null, 00:19:58.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:58.588 "is_configured": false, 00:19:58.588 "data_offset": 2048, 00:19:58.588 "data_size": 63488 00:19:58.588 }, 00:19:58.588 { 00:19:58.588 "name": null, 00:19:58.588 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:58.588 "is_configured": false, 00:19:58.588 "data_offset": 2048, 00:19:58.588 "data_size": 63488 00:19:58.588 } 00:19:58.588 ] 00:19:58.588 }' 00:19:58.588 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.588 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.155 [2024-11-27 08:51:55.742485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:59.155 [2024-11-27 
08:51:55.742575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.155 [2024-11-27 08:51:55.742614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:59.155 [2024-11-27 08:51:55.742631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.155 [2024-11-27 08:51:55.743270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.155 [2024-11-27 08:51:55.743304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:59.155 [2024-11-27 08:51:55.743452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:59.155 [2024-11-27 08:51:55.743496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:59.155 pt3 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.155 "name": "raid_bdev1", 00:19:59.155 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:59.155 "strip_size_kb": 64, 00:19:59.155 "state": "configuring", 00:19:59.155 "raid_level": "raid5f", 00:19:59.155 "superblock": true, 00:19:59.155 "num_base_bdevs": 4, 00:19:59.155 "num_base_bdevs_discovered": 2, 00:19:59.155 "num_base_bdevs_operational": 3, 00:19:59.155 "base_bdevs_list": [ 00:19:59.155 { 00:19:59.155 "name": null, 00:19:59.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.155 "is_configured": false, 00:19:59.155 "data_offset": 2048, 00:19:59.155 "data_size": 63488 00:19:59.155 }, 00:19:59.155 { 00:19:59.155 "name": "pt2", 00:19:59.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.155 "is_configured": true, 00:19:59.155 "data_offset": 2048, 00:19:59.155 "data_size": 63488 00:19:59.155 }, 00:19:59.155 { 00:19:59.155 "name": "pt3", 00:19:59.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.155 "is_configured": true, 00:19:59.155 "data_offset": 2048, 00:19:59.155 "data_size": 63488 00:19:59.155 }, 00:19:59.155 { 00:19:59.155 "name": null, 00:19:59.155 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:59.155 "is_configured": false, 00:19:59.155 "data_offset": 2048, 
00:19:59.155 "data_size": 63488 00:19:59.155 } 00:19:59.155 ] 00:19:59.155 }' 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.155 08:51:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.722 [2024-11-27 08:51:56.238632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:59.722 [2024-11-27 08:51:56.238734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.722 [2024-11-27 08:51:56.238773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:59.722 [2024-11-27 08:51:56.238790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.722 [2024-11-27 08:51:56.239462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.722 [2024-11-27 08:51:56.239489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:59.722 [2024-11-27 08:51:56.239612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:59.722 [2024-11-27 08:51:56.239647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:59.722 [2024-11-27 08:51:56.239830] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:59.722 [2024-11-27 08:51:56.239846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:59.722 [2024-11-27 08:51:56.240163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:59.722 [2024-11-27 08:51:56.246658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:59.722 [2024-11-27 08:51:56.246714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:59.722 [2024-11-27 08:51:56.247071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.722 pt4 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.722 
08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.722 "name": "raid_bdev1", 00:19:59.722 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:19:59.722 "strip_size_kb": 64, 00:19:59.722 "state": "online", 00:19:59.722 "raid_level": "raid5f", 00:19:59.722 "superblock": true, 00:19:59.722 "num_base_bdevs": 4, 00:19:59.722 "num_base_bdevs_discovered": 3, 00:19:59.722 "num_base_bdevs_operational": 3, 00:19:59.722 "base_bdevs_list": [ 00:19:59.722 { 00:19:59.722 "name": null, 00:19:59.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.722 "is_configured": false, 00:19:59.722 "data_offset": 2048, 00:19:59.722 "data_size": 63488 00:19:59.722 }, 00:19:59.722 { 00:19:59.722 "name": "pt2", 00:19:59.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.722 "is_configured": true, 00:19:59.722 "data_offset": 2048, 00:19:59.722 "data_size": 63488 00:19:59.722 }, 00:19:59.722 { 00:19:59.722 "name": "pt3", 00:19:59.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:59.722 "is_configured": true, 00:19:59.722 "data_offset": 2048, 00:19:59.722 "data_size": 63488 00:19:59.722 }, 00:19:59.722 { 00:19:59.722 "name": "pt4", 00:19:59.722 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:59.722 "is_configured": true, 00:19:59.722 "data_offset": 2048, 00:19:59.722 "data_size": 63488 00:19:59.722 } 00:19:59.722 ] 00:19:59.722 }' 00:19:59.722 08:51:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.722 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.290 [2024-11-27 08:51:56.771104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.290 [2024-11-27 08:51:56.771145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.290 [2024-11-27 08:51:56.771274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.290 [2024-11-27 08:51:56.771411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.290 [2024-11-27 08:51:56.771445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.290 [2024-11-27 08:51:56.843074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:00.290 [2024-11-27 08:51:56.843311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.290 [2024-11-27 08:51:56.843482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:00.290 [2024-11-27 08:51:56.843601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.290 [2024-11-27 08:51:56.846888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.290 [2024-11-27 08:51:56.847059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:00.290 [2024-11-27 08:51:56.847185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:00.290 [2024-11-27 08:51:56.847283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:00.290 
[2024-11-27 08:51:56.847525] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:00.290 [2024-11-27 08:51:56.847550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.290 [2024-11-27 08:51:56.847574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:00.290 pt1 00:20:00.290 [2024-11-27 08:51:56.847654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.290 [2024-11-27 08:51:56.847819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.290 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.291 "name": "raid_bdev1", 00:20:00.291 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:20:00.291 "strip_size_kb": 64, 00:20:00.291 "state": "configuring", 00:20:00.291 "raid_level": "raid5f", 00:20:00.291 "superblock": true, 00:20:00.291 "num_base_bdevs": 4, 00:20:00.291 "num_base_bdevs_discovered": 2, 00:20:00.291 "num_base_bdevs_operational": 3, 00:20:00.291 "base_bdevs_list": [ 00:20:00.291 { 00:20:00.291 "name": null, 00:20:00.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.291 "is_configured": false, 00:20:00.291 "data_offset": 2048, 00:20:00.291 "data_size": 63488 00:20:00.291 }, 00:20:00.291 { 00:20:00.291 "name": "pt2", 00:20:00.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.291 "is_configured": true, 00:20:00.291 "data_offset": 2048, 00:20:00.291 "data_size": 63488 00:20:00.291 }, 00:20:00.291 { 00:20:00.291 "name": "pt3", 00:20:00.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:00.291 "is_configured": true, 00:20:00.291 "data_offset": 2048, 00:20:00.291 "data_size": 63488 00:20:00.291 }, 00:20:00.291 { 00:20:00.291 "name": null, 00:20:00.291 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:00.291 "is_configured": false, 00:20:00.291 "data_offset": 2048, 00:20:00.291 "data_size": 63488 00:20:00.291 } 00:20:00.291 ] 
00:20:00.291 }' 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.291 08:51:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.858 [2024-11-27 08:51:57.419459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:00.858 [2024-11-27 08:51:57.419549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.858 [2024-11-27 08:51:57.419591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:00.858 [2024-11-27 08:51:57.419608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.858 [2024-11-27 08:51:57.420230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.858 [2024-11-27 08:51:57.420257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:20:00.858 [2024-11-27 08:51:57.420392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:00.858 [2024-11-27 08:51:57.420437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:00.858 [2024-11-27 08:51:57.420624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:00.858 [2024-11-27 08:51:57.420641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:00.858 [2024-11-27 08:51:57.420965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:00.858 [2024-11-27 08:51:57.427506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:00.858 [2024-11-27 08:51:57.427539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:00.858 [2024-11-27 08:51:57.427866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.858 pt4 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.858 08:51:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.858 "name": "raid_bdev1", 00:20:00.858 "uuid": "c60fde31-731d-447c-8662-88793b03812a", 00:20:00.858 "strip_size_kb": 64, 00:20:00.858 "state": "online", 00:20:00.858 "raid_level": "raid5f", 00:20:00.858 "superblock": true, 00:20:00.858 "num_base_bdevs": 4, 00:20:00.858 "num_base_bdevs_discovered": 3, 00:20:00.858 "num_base_bdevs_operational": 3, 00:20:00.858 "base_bdevs_list": [ 00:20:00.858 { 00:20:00.858 "name": null, 00:20:00.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.858 "is_configured": false, 00:20:00.858 "data_offset": 2048, 00:20:00.858 "data_size": 63488 00:20:00.858 }, 00:20:00.858 { 00:20:00.858 "name": "pt2", 00:20:00.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.858 "is_configured": true, 00:20:00.858 "data_offset": 2048, 00:20:00.858 "data_size": 63488 00:20:00.858 }, 00:20:00.858 { 00:20:00.858 "name": "pt3", 00:20:00.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:00.858 "is_configured": true, 00:20:00.858 "data_offset": 2048, 00:20:00.858 "data_size": 63488 
00:20:00.858 }, 00:20:00.858 { 00:20:00.858 "name": "pt4", 00:20:00.858 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:00.858 "is_configured": true, 00:20:00.858 "data_offset": 2048, 00:20:00.858 "data_size": 63488 00:20:00.858 } 00:20:00.858 ] 00:20:00.858 }' 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.858 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.423 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:01.423 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.423 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.423 08:51:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:01.423 08:51:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.423 [2024-11-27 08:51:58.012112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c60fde31-731d-447c-8662-88793b03812a '!=' c60fde31-731d-447c-8662-88793b03812a ']' 00:20:01.423 08:51:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84653 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' -z 84653 ']' 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # kill -0 84653 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # uname 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 84653 00:20:01.423 killing process with pid 84653 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 84653' 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # kill 84653 00:20:01.423 [2024-11-27 08:51:58.096199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.423 08:51:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@975 -- # wait 84653 00:20:01.423 [2024-11-27 08:51:58.096347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.423 [2024-11-27 08:51:58.096466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.423 [2024-11-27 08:51:58.096489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:01.990 [2024-11-27 08:51:58.469353] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.968 ************************************ 00:20:02.968 END TEST raid5f_superblock_test 00:20:02.968 
************************************ 00:20:02.968 08:51:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:02.968 00:20:02.968 real 0m9.425s 00:20:02.968 user 0m15.302s 00:20:02.968 sys 0m1.462s 00:20:02.968 08:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:20:02.968 08:51:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.968 08:51:59 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:02.968 08:51:59 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:20:02.968 08:51:59 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:20:02.968 08:51:59 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:02.968 08:51:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:02.968 ************************************ 00:20:02.968 START TEST raid5f_rebuild_test 00:20:02.968 ************************************ 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid5f 4 false false true 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:02.968 08:51:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85144 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85144 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # '[' -z 85144 ']' 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:02.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:02.968 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.227 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:03.227 Zero copy mechanism will not be used. 00:20:03.227 [2024-11-27 08:51:59.740194] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:20:03.227 [2024-11-27 08:51:59.740371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85144 ] 00:20:03.227 [2024-11-27 08:51:59.911978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.486 [2024-11-27 08:52:00.057969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.745 [2024-11-27 08:52:00.280185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.745 [2024-11-27 08:52:00.280283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.003 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:04.003 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # return 0 00:20:04.003 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:04.003 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:04.003 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.003 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 BaseBdev1_malloc 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 [2024-11-27 08:52:00.777614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:20:04.262 [2024-11-27 08:52:00.777849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.262 [2024-11-27 08:52:00.777890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:04.262 [2024-11-27 08:52:00.777911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.262 [2024-11-27 08:52:00.780915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.262 [2024-11-27 08:52:00.781082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:04.262 BaseBdev1 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 BaseBdev2_malloc 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 [2024-11-27 08:52:00.837350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:04.262 [2024-11-27 08:52:00.837434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.262 [2024-11-27 08:52:00.837465] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:04.262 [2024-11-27 08:52:00.837486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.262 [2024-11-27 08:52:00.840565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.262 [2024-11-27 08:52:00.840617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:04.262 BaseBdev2 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 BaseBdev3_malloc 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 [2024-11-27 08:52:00.905440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:04.262 [2024-11-27 08:52:00.905525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.262 [2024-11-27 08:52:00.905559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:04.262 [2024-11-27 08:52:00.905578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.262 
[2024-11-27 08:52:00.908570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.262 [2024-11-27 08:52:00.908615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:04.262 BaseBdev3 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 BaseBdev4_malloc 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.262 [2024-11-27 08:52:00.965661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:04.262 [2024-11-27 08:52:00.965741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.262 [2024-11-27 08:52:00.965787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:04.262 [2024-11-27 08:52:00.965808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.262 [2024-11-27 08:52:00.968872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.262 [2024-11-27 08:52:00.968922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:20:04.262 BaseBdev4 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.262 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:04.263 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.263 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.263 spare_malloc 00:20:04.263 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.263 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:04.263 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.263 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.521 spare_delay 00:20:04.521 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.521 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:04.521 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.521 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.521 [2024-11-27 08:52:01.033635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:04.521 [2024-11-27 08:52:01.033717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.521 [2024-11-27 08:52:01.033755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:04.521 [2024-11-27 08:52:01.033775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.521 [2024-11-27 08:52:01.036811] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.521 [2024-11-27 08:52:01.036858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:04.521 spare 00:20:04.521 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.521 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:04.521 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.521 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.521 [2024-11-27 08:52:01.045772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.521 [2024-11-27 08:52:01.048346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.521 [2024-11-27 08:52:01.048439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.521 [2024-11-27 08:52:01.048522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:04.521 [2024-11-27 08:52:01.048651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:04.521 [2024-11-27 08:52:01.048672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:04.521 [2024-11-27 08:52:01.049010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:04.521 [2024-11-27 08:52:01.055856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:04.521 [2024-11-27 08:52:01.055888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:04.521 [2024-11-27 08:52:01.056155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.521 08:52:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.522 "name": "raid_bdev1", 00:20:04.522 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:04.522 "strip_size_kb": 64, 00:20:04.522 "state": "online", 00:20:04.522 
"raid_level": "raid5f", 00:20:04.522 "superblock": false, 00:20:04.522 "num_base_bdevs": 4, 00:20:04.522 "num_base_bdevs_discovered": 4, 00:20:04.522 "num_base_bdevs_operational": 4, 00:20:04.522 "base_bdevs_list": [ 00:20:04.522 { 00:20:04.522 "name": "BaseBdev1", 00:20:04.522 "uuid": "0630d86a-c227-5da8-b649-2043c80c5e13", 00:20:04.522 "is_configured": true, 00:20:04.522 "data_offset": 0, 00:20:04.522 "data_size": 65536 00:20:04.522 }, 00:20:04.522 { 00:20:04.522 "name": "BaseBdev2", 00:20:04.522 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:04.522 "is_configured": true, 00:20:04.522 "data_offset": 0, 00:20:04.522 "data_size": 65536 00:20:04.522 }, 00:20:04.522 { 00:20:04.522 "name": "BaseBdev3", 00:20:04.522 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:04.522 "is_configured": true, 00:20:04.522 "data_offset": 0, 00:20:04.522 "data_size": 65536 00:20:04.522 }, 00:20:04.522 { 00:20:04.522 "name": "BaseBdev4", 00:20:04.522 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:04.522 "is_configured": true, 00:20:04.522 "data_offset": 0, 00:20:04.522 "data_size": 65536 00:20:04.522 } 00:20:04.522 ] 00:20:04.522 }' 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.522 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.090 [2024-11-27 08:52:01.560584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:20:05.090 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:05.350 [2024-11-27 08:52:01.924430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:05.350 /dev/nbd0 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # break 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.350 1+0 records in 00:20:05.350 1+0 records out 00:20:05.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415754 s, 9.9 MB/s 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:05.350 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:20:05.916 512+0 records in 00:20:05.916 512+0 records out 00:20:05.916 100663296 bytes (101 MB, 96 MiB) copied, 0.652048 s, 154 MB/s 00:20:05.917 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:05.917 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:05.917 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:05.917 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:05.917 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:05.917 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.917 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:06.484 
[2024-11-27 08:52:02.951079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.484 [2024-11-27 08:52:02.963395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.484 08:52:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.484 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.484 "name": "raid_bdev1", 00:20:06.484 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:06.484 "strip_size_kb": 64, 00:20:06.484 "state": "online", 00:20:06.484 "raid_level": "raid5f", 00:20:06.484 "superblock": false, 00:20:06.484 "num_base_bdevs": 4, 00:20:06.484 "num_base_bdevs_discovered": 3, 00:20:06.484 "num_base_bdevs_operational": 3, 00:20:06.484 "base_bdevs_list": [ 00:20:06.484 { 00:20:06.484 "name": null, 00:20:06.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.484 "is_configured": false, 00:20:06.484 "data_offset": 0, 00:20:06.484 "data_size": 65536 00:20:06.484 }, 00:20:06.484 { 00:20:06.484 "name": "BaseBdev2", 00:20:06.484 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:06.484 "is_configured": true, 00:20:06.484 "data_offset": 0, 00:20:06.484 "data_size": 65536 00:20:06.484 }, 00:20:06.484 { 00:20:06.484 "name": "BaseBdev3", 00:20:06.484 "uuid": 
"1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:06.484 "is_configured": true, 00:20:06.484 "data_offset": 0, 00:20:06.484 "data_size": 65536 00:20:06.484 }, 00:20:06.484 { 00:20:06.484 "name": "BaseBdev4", 00:20:06.484 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:06.484 "is_configured": true, 00:20:06.484 "data_offset": 0, 00:20:06.484 "data_size": 65536 00:20:06.484 } 00:20:06.484 ] 00:20:06.484 }' 00:20:06.484 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.484 08:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.052 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:07.052 08:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.052 08:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.052 [2024-11-27 08:52:03.515573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.052 [2024-11-27 08:52:03.530579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:07.052 08:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.052 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:07.052 [2024-11-27 08:52:03.540066] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.987 08:52:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.987 "name": "raid_bdev1", 00:20:07.987 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:07.987 "strip_size_kb": 64, 00:20:07.987 "state": "online", 00:20:07.987 "raid_level": "raid5f", 00:20:07.987 "superblock": false, 00:20:07.987 "num_base_bdevs": 4, 00:20:07.987 "num_base_bdevs_discovered": 4, 00:20:07.987 "num_base_bdevs_operational": 4, 00:20:07.987 "process": { 00:20:07.987 "type": "rebuild", 00:20:07.987 "target": "spare", 00:20:07.987 "progress": { 00:20:07.987 "blocks": 17280, 00:20:07.987 "percent": 8 00:20:07.987 } 00:20:07.987 }, 00:20:07.987 "base_bdevs_list": [ 00:20:07.987 { 00:20:07.987 "name": "spare", 00:20:07.987 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:07.987 "is_configured": true, 00:20:07.987 "data_offset": 0, 00:20:07.987 "data_size": 65536 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "name": "BaseBdev2", 00:20:07.987 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:07.987 "is_configured": true, 00:20:07.987 "data_offset": 0, 00:20:07.987 "data_size": 65536 00:20:07.987 }, 00:20:07.987 { 00:20:07.987 "name": "BaseBdev3", 00:20:07.987 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:07.987 "is_configured": true, 00:20:07.987 "data_offset": 0, 00:20:07.987 "data_size": 65536 00:20:07.987 }, 
00:20:07.987 { 00:20:07.987 "name": "BaseBdev4", 00:20:07.987 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:07.987 "is_configured": true, 00:20:07.987 "data_offset": 0, 00:20:07.987 "data_size": 65536 00:20:07.987 } 00:20:07.987 ] 00:20:07.987 }' 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.987 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.987 [2024-11-27 08:52:04.687201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.245 [2024-11-27 08:52:04.755480] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:08.245 [2024-11-27 08:52:04.755626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.245 [2024-11-27 08:52:04.755655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.245 [2024-11-27 08:52:04.755672] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.245 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.245 "name": "raid_bdev1", 00:20:08.245 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:08.245 "strip_size_kb": 64, 00:20:08.245 "state": "online", 00:20:08.245 "raid_level": "raid5f", 00:20:08.245 "superblock": false, 00:20:08.245 "num_base_bdevs": 4, 00:20:08.245 "num_base_bdevs_discovered": 3, 00:20:08.245 "num_base_bdevs_operational": 3, 00:20:08.245 "base_bdevs_list": [ 00:20:08.245 { 00:20:08.245 "name": null, 00:20:08.245 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:08.245 "is_configured": false, 00:20:08.245 "data_offset": 0, 00:20:08.245 "data_size": 65536 00:20:08.245 }, 00:20:08.245 { 00:20:08.245 "name": "BaseBdev2", 00:20:08.245 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:08.245 "is_configured": true, 00:20:08.246 "data_offset": 0, 00:20:08.246 "data_size": 65536 00:20:08.246 }, 00:20:08.246 { 00:20:08.246 "name": "BaseBdev3", 00:20:08.246 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:08.246 "is_configured": true, 00:20:08.246 "data_offset": 0, 00:20:08.246 "data_size": 65536 00:20:08.246 }, 00:20:08.246 { 00:20:08.246 "name": "BaseBdev4", 00:20:08.246 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:08.246 "is_configured": true, 00:20:08.246 "data_offset": 0, 00:20:08.246 "data_size": 65536 00:20:08.246 } 00:20:08.246 ] 00:20:08.246 }' 00:20:08.246 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.246 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.813 "name": "raid_bdev1", 00:20:08.813 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:08.813 "strip_size_kb": 64, 00:20:08.813 "state": "online", 00:20:08.813 "raid_level": "raid5f", 00:20:08.813 "superblock": false, 00:20:08.813 "num_base_bdevs": 4, 00:20:08.813 "num_base_bdevs_discovered": 3, 00:20:08.813 "num_base_bdevs_operational": 3, 00:20:08.813 "base_bdevs_list": [ 00:20:08.813 { 00:20:08.813 "name": null, 00:20:08.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.813 "is_configured": false, 00:20:08.813 "data_offset": 0, 00:20:08.813 "data_size": 65536 00:20:08.813 }, 00:20:08.813 { 00:20:08.813 "name": "BaseBdev2", 00:20:08.813 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:08.813 "is_configured": true, 00:20:08.813 "data_offset": 0, 00:20:08.813 "data_size": 65536 00:20:08.813 }, 00:20:08.813 { 00:20:08.813 "name": "BaseBdev3", 00:20:08.813 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:08.813 "is_configured": true, 00:20:08.813 "data_offset": 0, 00:20:08.813 "data_size": 65536 00:20:08.813 }, 00:20:08.813 { 00:20:08.813 "name": "BaseBdev4", 00:20:08.813 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:08.813 "is_configured": true, 00:20:08.813 "data_offset": 0, 00:20:08.813 "data_size": 65536 00:20:08.813 } 00:20:08.813 ] 00:20:08.813 }' 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.813 [2024-11-27 08:52:05.453207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.813 [2024-11-27 08:52:05.466990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.813 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:08.813 [2024-11-27 08:52:05.476086] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.748 08:52:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.007 "name": "raid_bdev1", 00:20:10.007 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:10.007 "strip_size_kb": 64, 00:20:10.007 "state": "online", 00:20:10.007 "raid_level": "raid5f", 00:20:10.007 "superblock": false, 00:20:10.007 "num_base_bdevs": 4, 00:20:10.007 "num_base_bdevs_discovered": 4, 00:20:10.007 "num_base_bdevs_operational": 4, 00:20:10.007 "process": { 00:20:10.007 "type": "rebuild", 00:20:10.007 "target": "spare", 00:20:10.007 "progress": { 00:20:10.007 "blocks": 17280, 00:20:10.007 "percent": 8 00:20:10.007 } 00:20:10.007 }, 00:20:10.007 "base_bdevs_list": [ 00:20:10.007 { 00:20:10.007 "name": "spare", 00:20:10.007 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:10.007 "is_configured": true, 00:20:10.007 "data_offset": 0, 00:20:10.007 "data_size": 65536 00:20:10.007 }, 00:20:10.007 { 00:20:10.007 "name": "BaseBdev2", 00:20:10.007 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:10.007 "is_configured": true, 00:20:10.007 "data_offset": 0, 00:20:10.007 "data_size": 65536 00:20:10.007 }, 00:20:10.007 { 00:20:10.007 "name": "BaseBdev3", 00:20:10.007 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:10.007 "is_configured": true, 00:20:10.007 "data_offset": 0, 00:20:10.007 "data_size": 65536 00:20:10.007 }, 00:20:10.007 { 00:20:10.007 "name": "BaseBdev4", 00:20:10.007 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:10.007 "is_configured": true, 00:20:10.007 "data_offset": 0, 00:20:10.007 "data_size": 65536 00:20:10.007 } 00:20:10.007 ] 00:20:10.007 }' 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=680 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.007 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.007 "name": "raid_bdev1", 00:20:10.007 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:10.007 "strip_size_kb": 64, 
00:20:10.007 "state": "online", 00:20:10.007 "raid_level": "raid5f", 00:20:10.007 "superblock": false, 00:20:10.007 "num_base_bdevs": 4, 00:20:10.007 "num_base_bdevs_discovered": 4, 00:20:10.007 "num_base_bdevs_operational": 4, 00:20:10.007 "process": { 00:20:10.007 "type": "rebuild", 00:20:10.007 "target": "spare", 00:20:10.007 "progress": { 00:20:10.007 "blocks": 21120, 00:20:10.007 "percent": 10 00:20:10.007 } 00:20:10.007 }, 00:20:10.007 "base_bdevs_list": [ 00:20:10.007 { 00:20:10.007 "name": "spare", 00:20:10.007 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:10.007 "is_configured": true, 00:20:10.007 "data_offset": 0, 00:20:10.007 "data_size": 65536 00:20:10.008 }, 00:20:10.008 { 00:20:10.008 "name": "BaseBdev2", 00:20:10.008 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:10.008 "is_configured": true, 00:20:10.008 "data_offset": 0, 00:20:10.008 "data_size": 65536 00:20:10.008 }, 00:20:10.008 { 00:20:10.008 "name": "BaseBdev3", 00:20:10.008 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:10.008 "is_configured": true, 00:20:10.008 "data_offset": 0, 00:20:10.008 "data_size": 65536 00:20:10.008 }, 00:20:10.008 { 00:20:10.008 "name": "BaseBdev4", 00:20:10.008 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:10.008 "is_configured": true, 00:20:10.008 "data_offset": 0, 00:20:10.008 "data_size": 65536 00:20:10.008 } 00:20:10.008 ] 00:20:10.008 }' 00:20:10.008 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.008 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.008 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.267 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.267 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.203 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.203 "name": "raid_bdev1", 00:20:11.203 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:11.203 "strip_size_kb": 64, 00:20:11.203 "state": "online", 00:20:11.203 "raid_level": "raid5f", 00:20:11.203 "superblock": false, 00:20:11.203 "num_base_bdevs": 4, 00:20:11.203 "num_base_bdevs_discovered": 4, 00:20:11.203 "num_base_bdevs_operational": 4, 00:20:11.203 "process": { 00:20:11.203 "type": "rebuild", 00:20:11.203 "target": "spare", 00:20:11.203 "progress": { 00:20:11.203 "blocks": 42240, 00:20:11.203 "percent": 21 00:20:11.203 } 00:20:11.203 }, 00:20:11.203 "base_bdevs_list": [ 00:20:11.203 { 00:20:11.203 "name": "spare", 00:20:11.203 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:11.203 "is_configured": true, 
00:20:11.203 "data_offset": 0, 00:20:11.203 "data_size": 65536 00:20:11.203 }, 00:20:11.203 { 00:20:11.203 "name": "BaseBdev2", 00:20:11.203 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:11.203 "is_configured": true, 00:20:11.203 "data_offset": 0, 00:20:11.203 "data_size": 65536 00:20:11.203 }, 00:20:11.203 { 00:20:11.203 "name": "BaseBdev3", 00:20:11.203 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:11.203 "is_configured": true, 00:20:11.203 "data_offset": 0, 00:20:11.203 "data_size": 65536 00:20:11.203 }, 00:20:11.203 { 00:20:11.203 "name": "BaseBdev4", 00:20:11.203 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:11.203 "is_configured": true, 00:20:11.203 "data_offset": 0, 00:20:11.203 "data_size": 65536 00:20:11.204 } 00:20:11.204 ] 00:20:11.204 }' 00:20:11.204 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.204 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.204 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.462 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.462 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.399 08:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.399 08:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.399 "name": "raid_bdev1", 00:20:12.399 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:12.399 "strip_size_kb": 64, 00:20:12.399 "state": "online", 00:20:12.399 "raid_level": "raid5f", 00:20:12.399 "superblock": false, 00:20:12.399 "num_base_bdevs": 4, 00:20:12.399 "num_base_bdevs_discovered": 4, 00:20:12.399 "num_base_bdevs_operational": 4, 00:20:12.399 "process": { 00:20:12.399 "type": "rebuild", 00:20:12.399 "target": "spare", 00:20:12.399 "progress": { 00:20:12.399 "blocks": 65280, 00:20:12.399 "percent": 33 00:20:12.399 } 00:20:12.399 }, 00:20:12.399 "base_bdevs_list": [ 00:20:12.399 { 00:20:12.399 "name": "spare", 00:20:12.399 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:12.399 "is_configured": true, 00:20:12.399 "data_offset": 0, 00:20:12.399 "data_size": 65536 00:20:12.399 }, 00:20:12.399 { 00:20:12.399 "name": "BaseBdev2", 00:20:12.399 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:12.399 "is_configured": true, 00:20:12.399 "data_offset": 0, 00:20:12.399 "data_size": 65536 00:20:12.399 }, 00:20:12.399 { 00:20:12.399 "name": "BaseBdev3", 00:20:12.399 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:12.399 "is_configured": true, 00:20:12.399 "data_offset": 0, 00:20:12.399 "data_size": 65536 00:20:12.399 }, 00:20:12.399 { 00:20:12.399 "name": "BaseBdev4", 00:20:12.399 "uuid": 
"e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:12.399 "is_configured": true, 00:20:12.399 "data_offset": 0, 00:20:12.399 "data_size": 65536 00:20:12.399 } 00:20:12.399 ] 00:20:12.399 }' 00:20:12.399 08:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.399 08:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.400 08:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.400 08:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.400 08:52:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.795 08:52:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.795 "name": "raid_bdev1", 00:20:13.795 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:13.795 "strip_size_kb": 64, 00:20:13.795 "state": "online", 00:20:13.795 "raid_level": "raid5f", 00:20:13.795 "superblock": false, 00:20:13.795 "num_base_bdevs": 4, 00:20:13.795 "num_base_bdevs_discovered": 4, 00:20:13.795 "num_base_bdevs_operational": 4, 00:20:13.795 "process": { 00:20:13.795 "type": "rebuild", 00:20:13.795 "target": "spare", 00:20:13.795 "progress": { 00:20:13.795 "blocks": 88320, 00:20:13.795 "percent": 44 00:20:13.795 } 00:20:13.795 }, 00:20:13.795 "base_bdevs_list": [ 00:20:13.795 { 00:20:13.795 "name": "spare", 00:20:13.795 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 0, 00:20:13.796 "data_size": 65536 00:20:13.796 }, 00:20:13.796 { 00:20:13.796 "name": "BaseBdev2", 00:20:13.796 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 0, 00:20:13.796 "data_size": 65536 00:20:13.796 }, 00:20:13.796 { 00:20:13.796 "name": "BaseBdev3", 00:20:13.796 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 0, 00:20:13.796 "data_size": 65536 00:20:13.796 }, 00:20:13.796 { 00:20:13.796 "name": "BaseBdev4", 00:20:13.796 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 0, 00:20:13.796 "data_size": 65536 00:20:13.796 } 00:20:13.796 ] 00:20:13.796 }' 00:20:13.796 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.796 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.796 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.796 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:20:13.796 08:52:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.751 "name": "raid_bdev1", 00:20:14.751 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:14.751 "strip_size_kb": 64, 00:20:14.751 "state": "online", 00:20:14.751 "raid_level": "raid5f", 00:20:14.751 "superblock": false, 00:20:14.751 "num_base_bdevs": 4, 00:20:14.751 "num_base_bdevs_discovered": 4, 00:20:14.751 "num_base_bdevs_operational": 4, 00:20:14.751 "process": { 00:20:14.751 "type": "rebuild", 00:20:14.751 "target": "spare", 00:20:14.751 "progress": { 00:20:14.751 "blocks": 109440, 00:20:14.751 "percent": 55 00:20:14.751 } 00:20:14.751 }, 00:20:14.751 
"base_bdevs_list": [ 00:20:14.751 { 00:20:14.751 "name": "spare", 00:20:14.751 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:14.751 "is_configured": true, 00:20:14.751 "data_offset": 0, 00:20:14.751 "data_size": 65536 00:20:14.751 }, 00:20:14.751 { 00:20:14.751 "name": "BaseBdev2", 00:20:14.751 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:14.751 "is_configured": true, 00:20:14.751 "data_offset": 0, 00:20:14.751 "data_size": 65536 00:20:14.751 }, 00:20:14.751 { 00:20:14.751 "name": "BaseBdev3", 00:20:14.751 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:14.751 "is_configured": true, 00:20:14.751 "data_offset": 0, 00:20:14.751 "data_size": 65536 00:20:14.751 }, 00:20:14.751 { 00:20:14.751 "name": "BaseBdev4", 00:20:14.751 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:14.751 "is_configured": true, 00:20:14.751 "data_offset": 0, 00:20:14.751 "data_size": 65536 00:20:14.751 } 00:20:14.751 ] 00:20:14.751 }' 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.751 08:52:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.126 08:52:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.126 "name": "raid_bdev1", 00:20:16.126 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:16.126 "strip_size_kb": 64, 00:20:16.126 "state": "online", 00:20:16.126 "raid_level": "raid5f", 00:20:16.126 "superblock": false, 00:20:16.126 "num_base_bdevs": 4, 00:20:16.126 "num_base_bdevs_discovered": 4, 00:20:16.126 "num_base_bdevs_operational": 4, 00:20:16.126 "process": { 00:20:16.126 "type": "rebuild", 00:20:16.126 "target": "spare", 00:20:16.126 "progress": { 00:20:16.126 "blocks": 132480, 00:20:16.126 "percent": 67 00:20:16.126 } 00:20:16.126 }, 00:20:16.126 "base_bdevs_list": [ 00:20:16.126 { 00:20:16.126 "name": "spare", 00:20:16.126 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:16.126 "is_configured": true, 00:20:16.126 "data_offset": 0, 00:20:16.126 "data_size": 65536 00:20:16.126 }, 00:20:16.126 { 00:20:16.126 "name": "BaseBdev2", 00:20:16.126 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:16.126 "is_configured": true, 00:20:16.126 "data_offset": 0, 00:20:16.126 "data_size": 65536 00:20:16.126 }, 00:20:16.126 { 00:20:16.126 "name": "BaseBdev3", 00:20:16.126 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:16.126 
"is_configured": true, 00:20:16.126 "data_offset": 0, 00:20:16.126 "data_size": 65536 00:20:16.126 }, 00:20:16.126 { 00:20:16.126 "name": "BaseBdev4", 00:20:16.126 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:16.126 "is_configured": true, 00:20:16.126 "data_offset": 0, 00:20:16.126 "data_size": 65536 00:20:16.126 } 00:20:16.126 ] 00:20:16.126 }' 00:20:16.126 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.127 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.127 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.127 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.127 08:52:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.063 "name": "raid_bdev1", 00:20:17.063 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:17.063 "strip_size_kb": 64, 00:20:17.063 "state": "online", 00:20:17.063 "raid_level": "raid5f", 00:20:17.063 "superblock": false, 00:20:17.063 "num_base_bdevs": 4, 00:20:17.063 "num_base_bdevs_discovered": 4, 00:20:17.063 "num_base_bdevs_operational": 4, 00:20:17.063 "process": { 00:20:17.063 "type": "rebuild", 00:20:17.063 "target": "spare", 00:20:17.063 "progress": { 00:20:17.063 "blocks": 153600, 00:20:17.063 "percent": 78 00:20:17.063 } 00:20:17.063 }, 00:20:17.063 "base_bdevs_list": [ 00:20:17.063 { 00:20:17.063 "name": "spare", 00:20:17.063 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:17.063 "is_configured": true, 00:20:17.063 "data_offset": 0, 00:20:17.063 "data_size": 65536 00:20:17.063 }, 00:20:17.063 { 00:20:17.063 "name": "BaseBdev2", 00:20:17.063 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:17.063 "is_configured": true, 00:20:17.063 "data_offset": 0, 00:20:17.063 "data_size": 65536 00:20:17.063 }, 00:20:17.063 { 00:20:17.063 "name": "BaseBdev3", 00:20:17.063 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:17.063 "is_configured": true, 00:20:17.063 "data_offset": 0, 00:20:17.063 "data_size": 65536 00:20:17.063 }, 00:20:17.063 { 00:20:17.063 "name": "BaseBdev4", 00:20:17.063 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:17.063 "is_configured": true, 00:20:17.063 "data_offset": 0, 00:20:17.063 "data_size": 65536 00:20:17.063 } 00:20:17.063 ] 00:20:17.063 }' 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.063 08:52:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.063 08:52:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.438 "name": "raid_bdev1", 00:20:18.438 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:18.438 "strip_size_kb": 64, 00:20:18.438 "state": "online", 00:20:18.438 "raid_level": "raid5f", 00:20:18.438 "superblock": false, 00:20:18.438 "num_base_bdevs": 4, 00:20:18.438 "num_base_bdevs_discovered": 4, 00:20:18.438 "num_base_bdevs_operational": 4, 00:20:18.438 "process": { 00:20:18.438 
"type": "rebuild", 00:20:18.438 "target": "spare", 00:20:18.438 "progress": { 00:20:18.438 "blocks": 176640, 00:20:18.438 "percent": 89 00:20:18.438 } 00:20:18.438 }, 00:20:18.438 "base_bdevs_list": [ 00:20:18.438 { 00:20:18.438 "name": "spare", 00:20:18.438 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:18.438 "is_configured": true, 00:20:18.438 "data_offset": 0, 00:20:18.438 "data_size": 65536 00:20:18.438 }, 00:20:18.438 { 00:20:18.438 "name": "BaseBdev2", 00:20:18.438 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:18.438 "is_configured": true, 00:20:18.438 "data_offset": 0, 00:20:18.438 "data_size": 65536 00:20:18.438 }, 00:20:18.438 { 00:20:18.438 "name": "BaseBdev3", 00:20:18.438 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:18.438 "is_configured": true, 00:20:18.438 "data_offset": 0, 00:20:18.438 "data_size": 65536 00:20:18.438 }, 00:20:18.438 { 00:20:18.438 "name": "BaseBdev4", 00:20:18.438 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:18.438 "is_configured": true, 00:20:18.438 "data_offset": 0, 00:20:18.438 "data_size": 65536 00:20:18.438 } 00:20:18.438 ] 00:20:18.438 }' 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.438 08:52:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.374 [2024-11-27 08:52:15.901779] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:19.374 [2024-11-27 08:52:15.901901] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:19.374 [2024-11-27 08:52:15.901963] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.374 08:52:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.374 "name": "raid_bdev1", 00:20:19.374 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:19.374 "strip_size_kb": 64, 00:20:19.374 "state": "online", 00:20:19.374 "raid_level": "raid5f", 00:20:19.374 "superblock": false, 00:20:19.374 "num_base_bdevs": 4, 00:20:19.374 "num_base_bdevs_discovered": 4, 00:20:19.374 "num_base_bdevs_operational": 4, 00:20:19.374 "base_bdevs_list": [ 00:20:19.374 { 00:20:19.374 "name": "spare", 00:20:19.374 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:19.374 "is_configured": true, 00:20:19.374 "data_offset": 0, 00:20:19.374 "data_size": 65536 00:20:19.374 }, 00:20:19.374 { 
00:20:19.374 "name": "BaseBdev2", 00:20:19.374 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:19.374 "is_configured": true, 00:20:19.374 "data_offset": 0, 00:20:19.374 "data_size": 65536 00:20:19.374 }, 00:20:19.374 { 00:20:19.374 "name": "BaseBdev3", 00:20:19.374 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:19.374 "is_configured": true, 00:20:19.374 "data_offset": 0, 00:20:19.374 "data_size": 65536 00:20:19.374 }, 00:20:19.374 { 00:20:19.374 "name": "BaseBdev4", 00:20:19.374 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:19.374 "is_configured": true, 00:20:19.374 "data_offset": 0, 00:20:19.374 "data_size": 65536 00:20:19.374 } 00:20:19.374 ] 00:20:19.374 }' 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.374 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.633 "name": "raid_bdev1", 00:20:19.633 "uuid": "a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:19.633 "strip_size_kb": 64, 00:20:19.633 "state": "online", 00:20:19.633 "raid_level": "raid5f", 00:20:19.633 "superblock": false, 00:20:19.633 "num_base_bdevs": 4, 00:20:19.633 "num_base_bdevs_discovered": 4, 00:20:19.633 "num_base_bdevs_operational": 4, 00:20:19.633 "base_bdevs_list": [ 00:20:19.633 { 00:20:19.633 "name": "spare", 00:20:19.633 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:19.633 "is_configured": true, 00:20:19.633 "data_offset": 0, 00:20:19.633 "data_size": 65536 00:20:19.633 }, 00:20:19.633 { 00:20:19.633 "name": "BaseBdev2", 00:20:19.633 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:19.633 "is_configured": true, 00:20:19.633 "data_offset": 0, 00:20:19.633 "data_size": 65536 00:20:19.633 }, 00:20:19.633 { 00:20:19.633 "name": "BaseBdev3", 00:20:19.633 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:19.633 "is_configured": true, 00:20:19.633 "data_offset": 0, 00:20:19.633 "data_size": 65536 00:20:19.633 }, 00:20:19.633 { 00:20:19.633 "name": "BaseBdev4", 00:20:19.633 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:19.633 "is_configured": true, 00:20:19.633 "data_offset": 0, 00:20:19.633 "data_size": 65536 00:20:19.633 } 00:20:19.633 ] 00:20:19.633 }' 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.633 08:52:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.633 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.633 "name": "raid_bdev1", 00:20:19.633 "uuid": 
"a59fcb56-f628-43ba-b977-2e37d8ff2503", 00:20:19.633 "strip_size_kb": 64, 00:20:19.633 "state": "online", 00:20:19.633 "raid_level": "raid5f", 00:20:19.633 "superblock": false, 00:20:19.633 "num_base_bdevs": 4, 00:20:19.633 "num_base_bdevs_discovered": 4, 00:20:19.633 "num_base_bdevs_operational": 4, 00:20:19.633 "base_bdevs_list": [ 00:20:19.633 { 00:20:19.633 "name": "spare", 00:20:19.633 "uuid": "7d0f9eb0-a1a9-5ba3-b0d2-5f54624c0361", 00:20:19.633 "is_configured": true, 00:20:19.633 "data_offset": 0, 00:20:19.633 "data_size": 65536 00:20:19.633 }, 00:20:19.633 { 00:20:19.633 "name": "BaseBdev2", 00:20:19.633 "uuid": "aa68cb6d-41fb-50af-8c7b-69138b380ff4", 00:20:19.633 "is_configured": true, 00:20:19.633 "data_offset": 0, 00:20:19.633 "data_size": 65536 00:20:19.633 }, 00:20:19.633 { 00:20:19.633 "name": "BaseBdev3", 00:20:19.633 "uuid": "1f372727-1530-5e1e-9ad7-8465eab47916", 00:20:19.633 "is_configured": true, 00:20:19.633 "data_offset": 0, 00:20:19.633 "data_size": 65536 00:20:19.633 }, 00:20:19.633 { 00:20:19.633 "name": "BaseBdev4", 00:20:19.633 "uuid": "e5a72a3a-d72d-5aa9-9c54-f461a27e7638", 00:20:19.633 "is_configured": true, 00:20:19.633 "data_offset": 0, 00:20:19.633 "data_size": 65536 00:20:19.633 } 00:20:19.633 ] 00:20:19.633 }' 00:20:19.634 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.634 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.201 [2024-11-27 08:52:16.822881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:20.201 [2024-11-27 08:52:16.822939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:20:20.201 [2024-11-27 08:52:16.823076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.201 [2024-11-27 08:52:16.823229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:20.201 [2024-11-27 08:52:16.823254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:20.201 08:52:16 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.201 08:52:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:20.460 /dev/nbd0 00:20:20.460 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:20.460 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:20.460 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:20:20.460 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:20:20.460 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:20.461 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:20.461 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # break 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.720 1+0 records in 00:20:20.720 1+0 records out 00:20:20.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328508 s, 12.5 MB/s 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.720 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:20.979 /dev/nbd1 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local i 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # break 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.979 1+0 records in 00:20:20.979 1+0 records out 00:20:20.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392871 s, 10.4 MB/s 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # size=4096 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # return 0 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.979 08:52:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:21.567 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:21.826 08:52:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85144 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' -z 85144 ']' 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # kill -0 85144 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # uname 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 85144 00:20:21.826 killing process with pid 85144 00:20:21.826 Received shutdown signal, test time was about 60.000000 seconds 00:20:21.826 00:20:21.826 Latency(us) 00:20:21.826 [2024-11-27T08:52:18.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.826 [2024-11-27T08:52:18.586Z] =================================================================================================================== 00:20:21.826 [2024-11-27T08:52:18.586Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # echo 'killing process with pid 85144' 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # kill 85144 00:20:21.826 [2024-11-27 08:52:18.429732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:21.826 08:52:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@975 -- # wait 85144 00:20:22.393 [2024-11-27 08:52:18.880146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.330 08:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:20:23.330 ************************************ 00:20:23.330 END TEST raid5f_rebuild_test 00:20:23.330 ************************************ 00:20:23.330 00:20:23.330 real 0m20.347s 00:20:23.330 user 0m25.233s 00:20:23.330 sys 0m2.443s 00:20:23.330 08:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:20:23.330 08:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.330 08:52:20 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:20:23.330 08:52:20 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:20:23.330 08:52:20 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:23.330 08:52:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:23.330 ************************************ 00:20:23.330 START TEST raid5f_rebuild_test_sb 00:20:23.330 ************************************ 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid5f 4 true false true 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:23.330 08:52:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:23.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85653 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85653 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # '[' -z 85653 ']' 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:23.330 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.590 [2024-11-27 08:52:20.165124] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:20:23.590 [2024-11-27 08:52:20.165587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85653 ] 00:20:23.590 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:23.590 Zero copy mechanism will not be used. 00:20:23.849 [2024-11-27 08:52:20.353291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.849 [2024-11-27 08:52:20.501136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.109 [2024-11-27 08:52:20.726727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.109 [2024-11-27 08:52:20.726796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # return 0 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 BaseBdev1_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.678 [2024-11-27 08:52:21.196120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:24.678 [2024-11-27 08:52:21.196223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.678 [2024-11-27 08:52:21.196260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:24.678 [2024-11-27 08:52:21.196279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.678 [2024-11-27 08:52:21.199313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.678 [2024-11-27 08:52:21.199398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:24.678 BaseBdev1 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 BaseBdev2_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 [2024-11-27 08:52:21.252513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:24.678 
[2024-11-27 08:52:21.252724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.678 [2024-11-27 08:52:21.252765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:24.678 [2024-11-27 08:52:21.252787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.678 [2024-11-27 08:52:21.255771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.678 [2024-11-27 08:52:21.255960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:24.678 BaseBdev2 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 BaseBdev3_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 [2024-11-27 08:52:21.315832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:24.678 [2024-11-27 08:52:21.315937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.678 [2024-11-27 08:52:21.315970] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:24.678 [2024-11-27 08:52:21.315990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.678 [2024-11-27 08:52:21.318914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.678 [2024-11-27 08:52:21.318966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:24.678 BaseBdev3 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 BaseBdev4_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 [2024-11-27 08:52:21.372626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:24.678 [2024-11-27 08:52:21.372702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.678 [2024-11-27 08:52:21.372732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:24.678 [2024-11-27 08:52:21.372750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:20:24.678 [2024-11-27 08:52:21.375769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.678 [2024-11-27 08:52:21.375951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:24.678 BaseBdev4 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 spare_malloc 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.678 spare_delay 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.678 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.936 [2024-11-27 08:52:21.436491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.936 [2024-11-27 08:52:21.436568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.936 [2024-11-27 08:52:21.436599] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:24.936 [2024-11-27 08:52:21.436617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.936 [2024-11-27 08:52:21.439559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.936 [2024-11-27 08:52:21.439611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.936 spare 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.936 [2024-11-27 08:52:21.444562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.936 [2024-11-27 08:52:21.447136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.936 [2024-11-27 08:52:21.447453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:24.936 [2024-11-27 08:52:21.447551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:24.936 [2024-11-27 08:52:21.447814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:24.936 [2024-11-27 08:52:21.447840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:24.936 [2024-11-27 08:52:21.448167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:24.936 [2024-11-27 08:52:21.455124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:24.936 
[2024-11-27 08:52:21.455149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:24.936 [2024-11-27 08:52:21.455439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.936 "name": "raid_bdev1", 00:20:24.936 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:24.936 "strip_size_kb": 64, 00:20:24.936 "state": "online", 00:20:24.936 "raid_level": "raid5f", 00:20:24.936 "superblock": true, 00:20:24.936 "num_base_bdevs": 4, 00:20:24.936 "num_base_bdevs_discovered": 4, 00:20:24.936 "num_base_bdevs_operational": 4, 00:20:24.936 "base_bdevs_list": [ 00:20:24.936 { 00:20:24.936 "name": "BaseBdev1", 00:20:24.936 "uuid": "6f34126f-a72a-50b7-b23d-d9b40e929402", 00:20:24.936 "is_configured": true, 00:20:24.936 "data_offset": 2048, 00:20:24.936 "data_size": 63488 00:20:24.936 }, 00:20:24.936 { 00:20:24.936 "name": "BaseBdev2", 00:20:24.936 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:24.936 "is_configured": true, 00:20:24.936 "data_offset": 2048, 00:20:24.936 "data_size": 63488 00:20:24.936 }, 00:20:24.936 { 00:20:24.936 "name": "BaseBdev3", 00:20:24.936 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:24.936 "is_configured": true, 00:20:24.936 "data_offset": 2048, 00:20:24.936 "data_size": 63488 00:20:24.936 }, 00:20:24.936 { 00:20:24.936 "name": "BaseBdev4", 00:20:24.936 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:24.936 "is_configured": true, 00:20:24.936 "data_offset": 2048, 00:20:24.936 "data_size": 63488 00:20:24.936 } 00:20:24.936 ] 00:20:24.936 }' 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.936 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.502 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:25.502 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:25.502 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.502 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.502 [2024-11-27 08:52:21.979954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:25.502 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:25.761 [2024-11-27 08:52:22.347618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:25.761 /dev/nbd0 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:20:25.761 1+0 records in 00:20:25.761 1+0 records out 00:20:25.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031883 s, 12.8 MB/s 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:25.761 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:20:26.381 496+0 records in 00:20:26.381 496+0 records out 00:20:26.381 97517568 bytes (98 MB, 93 MiB) copied, 0.621019 s, 157 MB/s 00:20:26.381 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:26.381 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:26.381 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:26.381 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:20:26.381 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:26.381 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:26.381 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:26.639 [2024-11-27 08:52:23.336553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.639 [2024-11-27 08:52:23.344832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.639 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.639 "name": "raid_bdev1", 00:20:26.639 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:26.640 "strip_size_kb": 64, 00:20:26.640 "state": "online", 00:20:26.640 "raid_level": "raid5f", 00:20:26.640 "superblock": true, 00:20:26.640 "num_base_bdevs": 4, 00:20:26.640 "num_base_bdevs_discovered": 3, 00:20:26.640 
"num_base_bdevs_operational": 3, 00:20:26.640 "base_bdevs_list": [ 00:20:26.640 { 00:20:26.640 "name": null, 00:20:26.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.640 "is_configured": false, 00:20:26.640 "data_offset": 0, 00:20:26.640 "data_size": 63488 00:20:26.640 }, 00:20:26.640 { 00:20:26.640 "name": "BaseBdev2", 00:20:26.640 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:26.640 "is_configured": true, 00:20:26.640 "data_offset": 2048, 00:20:26.640 "data_size": 63488 00:20:26.640 }, 00:20:26.640 { 00:20:26.640 "name": "BaseBdev3", 00:20:26.640 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:26.640 "is_configured": true, 00:20:26.640 "data_offset": 2048, 00:20:26.640 "data_size": 63488 00:20:26.640 }, 00:20:26.640 { 00:20:26.640 "name": "BaseBdev4", 00:20:26.640 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:26.640 "is_configured": true, 00:20:26.640 "data_offset": 2048, 00:20:26.640 "data_size": 63488 00:20:26.640 } 00:20:26.640 ] 00:20:26.640 }' 00:20:26.640 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.640 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.205 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:27.205 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.205 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.205 [2024-11-27 08:52:23.885010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.206 [2024-11-27 08:52:23.899793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:20:27.206 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.206 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:27.206 
[2024-11-27 08:52:23.909139] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.581 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.581 "name": "raid_bdev1", 00:20:28.581 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:28.581 "strip_size_kb": 64, 00:20:28.581 "state": "online", 00:20:28.581 "raid_level": "raid5f", 00:20:28.581 "superblock": true, 00:20:28.581 "num_base_bdevs": 4, 00:20:28.581 "num_base_bdevs_discovered": 4, 00:20:28.581 "num_base_bdevs_operational": 4, 00:20:28.581 "process": { 00:20:28.581 "type": "rebuild", 00:20:28.581 "target": "spare", 00:20:28.581 "progress": { 00:20:28.581 "blocks": 17280, 00:20:28.581 "percent": 9 00:20:28.581 } 00:20:28.582 }, 00:20:28.582 "base_bdevs_list": [ 00:20:28.582 { 00:20:28.582 "name": 
"spare", 00:20:28.582 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:28.582 "is_configured": true, 00:20:28.582 "data_offset": 2048, 00:20:28.582 "data_size": 63488 00:20:28.582 }, 00:20:28.582 { 00:20:28.582 "name": "BaseBdev2", 00:20:28.582 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:28.582 "is_configured": true, 00:20:28.582 "data_offset": 2048, 00:20:28.582 "data_size": 63488 00:20:28.582 }, 00:20:28.582 { 00:20:28.582 "name": "BaseBdev3", 00:20:28.582 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:28.582 "is_configured": true, 00:20:28.582 "data_offset": 2048, 00:20:28.582 "data_size": 63488 00:20:28.582 }, 00:20:28.582 { 00:20:28.582 "name": "BaseBdev4", 00:20:28.582 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:28.582 "is_configured": true, 00:20:28.582 "data_offset": 2048, 00:20:28.582 "data_size": 63488 00:20:28.582 } 00:20:28.582 ] 00:20:28.582 }' 00:20:28.582 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.582 [2024-11-27 08:52:25.059153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.582 [2024-11-27 08:52:25.121175] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:28.582 [2024-11-27 
08:52:25.121301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.582 [2024-11-27 08:52:25.121357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.582 [2024-11-27 08:52:25.121377] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.582 "name": "raid_bdev1", 00:20:28.582 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:28.582 "strip_size_kb": 64, 00:20:28.582 "state": "online", 00:20:28.582 "raid_level": "raid5f", 00:20:28.582 "superblock": true, 00:20:28.582 "num_base_bdevs": 4, 00:20:28.582 "num_base_bdevs_discovered": 3, 00:20:28.582 "num_base_bdevs_operational": 3, 00:20:28.582 "base_bdevs_list": [ 00:20:28.582 { 00:20:28.582 "name": null, 00:20:28.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.582 "is_configured": false, 00:20:28.582 "data_offset": 0, 00:20:28.582 "data_size": 63488 00:20:28.582 }, 00:20:28.582 { 00:20:28.582 "name": "BaseBdev2", 00:20:28.582 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:28.582 "is_configured": true, 00:20:28.582 "data_offset": 2048, 00:20:28.582 "data_size": 63488 00:20:28.582 }, 00:20:28.582 { 00:20:28.582 "name": "BaseBdev3", 00:20:28.582 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:28.582 "is_configured": true, 00:20:28.582 "data_offset": 2048, 00:20:28.582 "data_size": 63488 00:20:28.582 }, 00:20:28.582 { 00:20:28.582 "name": "BaseBdev4", 00:20:28.582 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:28.582 "is_configured": true, 00:20:28.582 "data_offset": 2048, 00:20:28.582 "data_size": 63488 00:20:28.582 } 00:20:28.582 ] 00:20:28.582 }' 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.582 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.149 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.149 "name": "raid_bdev1", 00:20:29.149 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:29.149 "strip_size_kb": 64, 00:20:29.149 "state": "online", 00:20:29.149 "raid_level": "raid5f", 00:20:29.149 "superblock": true, 00:20:29.149 "num_base_bdevs": 4, 00:20:29.149 "num_base_bdevs_discovered": 3, 00:20:29.149 "num_base_bdevs_operational": 3, 00:20:29.149 "base_bdevs_list": [ 00:20:29.149 { 00:20:29.149 "name": null, 00:20:29.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.149 "is_configured": false, 00:20:29.149 "data_offset": 0, 00:20:29.149 "data_size": 63488 00:20:29.149 }, 00:20:29.149 { 00:20:29.149 "name": "BaseBdev2", 00:20:29.149 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:29.149 "is_configured": true, 00:20:29.149 "data_offset": 2048, 00:20:29.149 "data_size": 63488 00:20:29.149 }, 00:20:29.150 { 00:20:29.150 "name": "BaseBdev3", 00:20:29.150 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:29.150 "is_configured": true, 
00:20:29.150 "data_offset": 2048, 00:20:29.150 "data_size": 63488 00:20:29.150 }, 00:20:29.150 { 00:20:29.150 "name": "BaseBdev4", 00:20:29.150 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:29.150 "is_configured": true, 00:20:29.150 "data_offset": 2048, 00:20:29.150 "data_size": 63488 00:20:29.150 } 00:20:29.150 ] 00:20:29.150 }' 00:20:29.150 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.150 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.150 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.150 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.150 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.150 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.150 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.444 [2024-11-27 08:52:25.907204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.445 [2024-11-27 08:52:25.921086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:20:29.445 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.445 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:29.445 [2024-11-27 08:52:25.930384] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.381 08:52:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.381 "name": "raid_bdev1", 00:20:30.381 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:30.381 "strip_size_kb": 64, 00:20:30.381 "state": "online", 00:20:30.381 "raid_level": "raid5f", 00:20:30.381 "superblock": true, 00:20:30.381 "num_base_bdevs": 4, 00:20:30.381 "num_base_bdevs_discovered": 4, 00:20:30.381 "num_base_bdevs_operational": 4, 00:20:30.381 "process": { 00:20:30.381 "type": "rebuild", 00:20:30.381 "target": "spare", 00:20:30.381 "progress": { 00:20:30.381 "blocks": 17280, 00:20:30.381 "percent": 9 00:20:30.381 } 00:20:30.381 }, 00:20:30.381 "base_bdevs_list": [ 00:20:30.381 { 00:20:30.381 "name": "spare", 00:20:30.381 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 2048, 00:20:30.381 "data_size": 63488 00:20:30.381 }, 00:20:30.381 { 00:20:30.381 "name": "BaseBdev2", 00:20:30.381 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 2048, 00:20:30.381 "data_size": 63488 
00:20:30.381 }, 00:20:30.381 { 00:20:30.381 "name": "BaseBdev3", 00:20:30.381 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 2048, 00:20:30.381 "data_size": 63488 00:20:30.381 }, 00:20:30.381 { 00:20:30.381 "name": "BaseBdev4", 00:20:30.381 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 2048, 00:20:30.381 "data_size": 63488 00:20:30.381 } 00:20:30.381 ] 00:20:30.381 }' 00:20:30.381 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:30.381 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=701 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.381 08:52:27 
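(Editor's note on the `[: =: unary operator expected` message captured above: this is the classic single-bracket pitfall where an unset or empty variable expands to nothing, leaving `[` with only `= false` and no left operand. The snippet below is a generic illustration of the failure mode and the quoting fix, not the actual bdev_raid.sh code; the variable name `flag` is hypothetical.)

```shell
#!/bin/sh
# 'flag' is deliberately unset, mimicking the empty expansion seen in the log.
unset flag

# Unquoted, this would expand to `[ = false ]` and emit
# "[: =: unary operator expected". Quoting (with a default expansion)
# keeps the left operand present even when the variable is empty.
if [ "${flag:-}" = false ]; then
    echo "flag is false"
else
    echo "flag is unset or not false"
fi
```

In bash specifically, the double-bracket form `[[ $flag = false ]]` also avoids the error, because `[[ ]]` is shell syntax rather than a command and does not word-split its operands.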
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.381 "name": "raid_bdev1", 00:20:30.381 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:30.381 "strip_size_kb": 64, 00:20:30.381 "state": "online", 00:20:30.381 "raid_level": "raid5f", 00:20:30.381 "superblock": true, 00:20:30.381 "num_base_bdevs": 4, 00:20:30.381 "num_base_bdevs_discovered": 4, 00:20:30.381 "num_base_bdevs_operational": 4, 00:20:30.381 "process": { 00:20:30.381 "type": "rebuild", 00:20:30.381 "target": "spare", 00:20:30.381 "progress": { 00:20:30.381 "blocks": 21120, 00:20:30.381 "percent": 11 00:20:30.381 } 00:20:30.381 }, 00:20:30.381 "base_bdevs_list": [ 00:20:30.381 { 00:20:30.381 "name": "spare", 00:20:30.381 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 2048, 00:20:30.381 "data_size": 63488 00:20:30.381 }, 00:20:30.381 { 00:20:30.381 "name": "BaseBdev2", 00:20:30.381 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 2048, 00:20:30.381 "data_size": 63488 
00:20:30.381 }, 00:20:30.381 { 00:20:30.381 "name": "BaseBdev3", 00:20:30.381 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 2048, 00:20:30.381 "data_size": 63488 00:20:30.381 }, 00:20:30.381 { 00:20:30.381 "name": "BaseBdev4", 00:20:30.381 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 2048, 00:20:30.381 "data_size": 63488 00:20:30.381 } 00:20:30.381 ] 00:20:30.381 }' 00:20:30.381 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.639 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.639 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.639 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.639 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.575 08:52:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.575 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.575 "name": "raid_bdev1", 00:20:31.575 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:31.575 "strip_size_kb": 64, 00:20:31.575 "state": "online", 00:20:31.575 "raid_level": "raid5f", 00:20:31.575 "superblock": true, 00:20:31.575 "num_base_bdevs": 4, 00:20:31.575 "num_base_bdevs_discovered": 4, 00:20:31.575 "num_base_bdevs_operational": 4, 00:20:31.575 "process": { 00:20:31.575 "type": "rebuild", 00:20:31.575 "target": "spare", 00:20:31.575 "progress": { 00:20:31.575 "blocks": 42240, 00:20:31.575 "percent": 22 00:20:31.575 } 00:20:31.575 }, 00:20:31.575 "base_bdevs_list": [ 00:20:31.575 { 00:20:31.575 "name": "spare", 00:20:31.575 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:31.575 "is_configured": true, 00:20:31.575 "data_offset": 2048, 00:20:31.575 "data_size": 63488 00:20:31.575 }, 00:20:31.575 { 00:20:31.575 "name": "BaseBdev2", 00:20:31.575 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:31.575 "is_configured": true, 00:20:31.575 "data_offset": 2048, 00:20:31.575 "data_size": 63488 00:20:31.575 }, 00:20:31.575 { 00:20:31.576 "name": "BaseBdev3", 00:20:31.576 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:31.576 "is_configured": true, 00:20:31.576 "data_offset": 2048, 00:20:31.576 "data_size": 63488 00:20:31.576 }, 00:20:31.576 { 00:20:31.576 "name": "BaseBdev4", 00:20:31.576 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:31.576 "is_configured": true, 00:20:31.576 "data_offset": 2048, 00:20:31.576 "data_size": 63488 00:20:31.576 } 00:20:31.576 ] 00:20:31.576 }' 00:20:31.576 08:52:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.834 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.834 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.834 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.834 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.770 "name": "raid_bdev1", 00:20:32.770 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:32.770 
"strip_size_kb": 64, 00:20:32.770 "state": "online", 00:20:32.770 "raid_level": "raid5f", 00:20:32.770 "superblock": true, 00:20:32.770 "num_base_bdevs": 4, 00:20:32.770 "num_base_bdevs_discovered": 4, 00:20:32.770 "num_base_bdevs_operational": 4, 00:20:32.770 "process": { 00:20:32.770 "type": "rebuild", 00:20:32.770 "target": "spare", 00:20:32.770 "progress": { 00:20:32.770 "blocks": 65280, 00:20:32.770 "percent": 34 00:20:32.770 } 00:20:32.770 }, 00:20:32.770 "base_bdevs_list": [ 00:20:32.770 { 00:20:32.770 "name": "spare", 00:20:32.770 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:32.770 "is_configured": true, 00:20:32.770 "data_offset": 2048, 00:20:32.770 "data_size": 63488 00:20:32.770 }, 00:20:32.770 { 00:20:32.770 "name": "BaseBdev2", 00:20:32.770 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:32.770 "is_configured": true, 00:20:32.770 "data_offset": 2048, 00:20:32.770 "data_size": 63488 00:20:32.770 }, 00:20:32.770 { 00:20:32.770 "name": "BaseBdev3", 00:20:32.770 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:32.770 "is_configured": true, 00:20:32.770 "data_offset": 2048, 00:20:32.770 "data_size": 63488 00:20:32.770 }, 00:20:32.770 { 00:20:32.770 "name": "BaseBdev4", 00:20:32.770 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:32.770 "is_configured": true, 00:20:32.770 "data_offset": 2048, 00:20:32.770 "data_size": 63488 00:20:32.770 } 00:20:32.770 ] 00:20:32.770 }' 00:20:32.770 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.029 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.029 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.029 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.029 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:33.994 
08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.994 "name": "raid_bdev1", 00:20:33.994 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:33.994 "strip_size_kb": 64, 00:20:33.994 "state": "online", 00:20:33.994 "raid_level": "raid5f", 00:20:33.994 "superblock": true, 00:20:33.994 "num_base_bdevs": 4, 00:20:33.994 "num_base_bdevs_discovered": 4, 00:20:33.994 "num_base_bdevs_operational": 4, 00:20:33.994 "process": { 00:20:33.994 "type": "rebuild", 00:20:33.994 "target": "spare", 00:20:33.994 "progress": { 00:20:33.994 "blocks": 88320, 00:20:33.994 "percent": 46 00:20:33.994 } 00:20:33.994 }, 00:20:33.994 "base_bdevs_list": [ 00:20:33.994 { 00:20:33.994 "name": "spare", 00:20:33.994 "uuid": 
"89c9eb94-1591-5891-8060-2220708671f0", 00:20:33.994 "is_configured": true, 00:20:33.994 "data_offset": 2048, 00:20:33.994 "data_size": 63488 00:20:33.994 }, 00:20:33.994 { 00:20:33.994 "name": "BaseBdev2", 00:20:33.994 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:33.994 "is_configured": true, 00:20:33.994 "data_offset": 2048, 00:20:33.994 "data_size": 63488 00:20:33.994 }, 00:20:33.994 { 00:20:33.994 "name": "BaseBdev3", 00:20:33.994 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:33.994 "is_configured": true, 00:20:33.994 "data_offset": 2048, 00:20:33.994 "data_size": 63488 00:20:33.994 }, 00:20:33.994 { 00:20:33.994 "name": "BaseBdev4", 00:20:33.994 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:33.994 "is_configured": true, 00:20:33.994 "data_offset": 2048, 00:20:33.994 "data_size": 63488 00:20:33.994 } 00:20:33.994 ] 00:20:33.994 }' 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.994 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:35.368 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:35.368 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.368 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.368 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:35.368 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:20:35.368 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.368 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.369 "name": "raid_bdev1", 00:20:35.369 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:35.369 "strip_size_kb": 64, 00:20:35.369 "state": "online", 00:20:35.369 "raid_level": "raid5f", 00:20:35.369 "superblock": true, 00:20:35.369 "num_base_bdevs": 4, 00:20:35.369 "num_base_bdevs_discovered": 4, 00:20:35.369 "num_base_bdevs_operational": 4, 00:20:35.369 "process": { 00:20:35.369 "type": "rebuild", 00:20:35.369 "target": "spare", 00:20:35.369 "progress": { 00:20:35.369 "blocks": 109440, 00:20:35.369 "percent": 57 00:20:35.369 } 00:20:35.369 }, 00:20:35.369 "base_bdevs_list": [ 00:20:35.369 { 00:20:35.369 "name": "spare", 00:20:35.369 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:35.369 "is_configured": true, 00:20:35.369 "data_offset": 2048, 00:20:35.369 "data_size": 63488 00:20:35.369 }, 00:20:35.369 { 00:20:35.369 "name": "BaseBdev2", 00:20:35.369 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:35.369 "is_configured": true, 00:20:35.369 "data_offset": 2048, 00:20:35.369 "data_size": 63488 00:20:35.369 }, 00:20:35.369 { 00:20:35.369 "name": "BaseBdev3", 00:20:35.369 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:35.369 "is_configured": true, 00:20:35.369 
"data_offset": 2048, 00:20:35.369 "data_size": 63488 00:20:35.369 }, 00:20:35.369 { 00:20:35.369 "name": "BaseBdev4", 00:20:35.369 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:35.369 "is_configured": true, 00:20:35.369 "data_offset": 2048, 00:20:35.369 "data_size": 63488 00:20:35.369 } 00:20:35.369 ] 00:20:35.369 }' 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.369 08:52:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.305 "name": "raid_bdev1", 00:20:36.305 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:36.305 "strip_size_kb": 64, 00:20:36.305 "state": "online", 00:20:36.305 "raid_level": "raid5f", 00:20:36.305 "superblock": true, 00:20:36.305 "num_base_bdevs": 4, 00:20:36.305 "num_base_bdevs_discovered": 4, 00:20:36.305 "num_base_bdevs_operational": 4, 00:20:36.305 "process": { 00:20:36.305 "type": "rebuild", 00:20:36.305 "target": "spare", 00:20:36.305 "progress": { 00:20:36.305 "blocks": 132480, 00:20:36.305 "percent": 69 00:20:36.305 } 00:20:36.305 }, 00:20:36.305 "base_bdevs_list": [ 00:20:36.305 { 00:20:36.305 "name": "spare", 00:20:36.305 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:36.305 "is_configured": true, 00:20:36.305 "data_offset": 2048, 00:20:36.305 "data_size": 63488 00:20:36.305 }, 00:20:36.305 { 00:20:36.305 "name": "BaseBdev2", 00:20:36.305 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:36.305 "is_configured": true, 00:20:36.305 "data_offset": 2048, 00:20:36.305 "data_size": 63488 00:20:36.305 }, 00:20:36.305 { 00:20:36.305 "name": "BaseBdev3", 00:20:36.305 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:36.305 "is_configured": true, 00:20:36.305 "data_offset": 2048, 00:20:36.305 "data_size": 63488 00:20:36.305 }, 00:20:36.305 { 00:20:36.305 "name": "BaseBdev4", 00:20:36.305 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:36.305 "is_configured": true, 00:20:36.305 "data_offset": 2048, 00:20:36.305 "data_size": 63488 00:20:36.305 } 00:20:36.305 ] 00:20:36.305 }' 00:20:36.305 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.305 08:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:20:36.305 08:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.565 08:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.565 08:52:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.502 "name": "raid_bdev1", 00:20:37.502 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:37.502 "strip_size_kb": 64, 00:20:37.502 "state": "online", 00:20:37.502 "raid_level": "raid5f", 00:20:37.502 "superblock": true, 00:20:37.502 "num_base_bdevs": 4, 00:20:37.502 "num_base_bdevs_discovered": 4, 
00:20:37.502 "num_base_bdevs_operational": 4, 00:20:37.502 "process": { 00:20:37.502 "type": "rebuild", 00:20:37.502 "target": "spare", 00:20:37.502 "progress": { 00:20:37.502 "blocks": 153600, 00:20:37.502 "percent": 80 00:20:37.502 } 00:20:37.502 }, 00:20:37.502 "base_bdevs_list": [ 00:20:37.502 { 00:20:37.502 "name": "spare", 00:20:37.502 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:37.502 "is_configured": true, 00:20:37.502 "data_offset": 2048, 00:20:37.502 "data_size": 63488 00:20:37.502 }, 00:20:37.502 { 00:20:37.502 "name": "BaseBdev2", 00:20:37.502 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:37.502 "is_configured": true, 00:20:37.502 "data_offset": 2048, 00:20:37.502 "data_size": 63488 00:20:37.502 }, 00:20:37.502 { 00:20:37.502 "name": "BaseBdev3", 00:20:37.502 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:37.502 "is_configured": true, 00:20:37.502 "data_offset": 2048, 00:20:37.502 "data_size": 63488 00:20:37.502 }, 00:20:37.502 { 00:20:37.502 "name": "BaseBdev4", 00:20:37.502 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:37.502 "is_configured": true, 00:20:37.502 "data_offset": 2048, 00:20:37.502 "data_size": 63488 00:20:37.502 } 00:20:37.502 ] 00:20:37.502 }' 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.502 08:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.878 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.878 "name": "raid_bdev1", 00:20:38.878 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:38.878 "strip_size_kb": 64, 00:20:38.878 "state": "online", 00:20:38.879 "raid_level": "raid5f", 00:20:38.879 "superblock": true, 00:20:38.879 "num_base_bdevs": 4, 00:20:38.879 "num_base_bdevs_discovered": 4, 00:20:38.879 "num_base_bdevs_operational": 4, 00:20:38.879 "process": { 00:20:38.879 "type": "rebuild", 00:20:38.879 "target": "spare", 00:20:38.879 "progress": { 00:20:38.879 "blocks": 176640, 00:20:38.879 "percent": 92 00:20:38.879 } 00:20:38.879 }, 00:20:38.879 "base_bdevs_list": [ 00:20:38.879 { 00:20:38.879 "name": "spare", 00:20:38.879 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:38.879 "is_configured": true, 00:20:38.879 "data_offset": 2048, 00:20:38.879 "data_size": 63488 00:20:38.879 }, 00:20:38.879 { 00:20:38.879 "name": "BaseBdev2", 
00:20:38.879 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:38.879 "is_configured": true, 00:20:38.879 "data_offset": 2048, 00:20:38.879 "data_size": 63488 00:20:38.879 }, 00:20:38.879 { 00:20:38.879 "name": "BaseBdev3", 00:20:38.879 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:38.879 "is_configured": true, 00:20:38.879 "data_offset": 2048, 00:20:38.879 "data_size": 63488 00:20:38.879 }, 00:20:38.879 { 00:20:38.879 "name": "BaseBdev4", 00:20:38.879 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:38.879 "is_configured": true, 00:20:38.879 "data_offset": 2048, 00:20:38.879 "data_size": 63488 00:20:38.879 } 00:20:38.879 ] 00:20:38.879 }' 00:20:38.879 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.879 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.879 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.879 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.879 08:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:39.447 [2024-11-27 08:52:36.040144] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:39.447 [2024-11-27 08:52:36.040254] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:39.447 [2024-11-27 08:52:36.040520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.706 08:52:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.706 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.965 "name": "raid_bdev1", 00:20:39.965 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:39.965 "strip_size_kb": 64, 00:20:39.965 "state": "online", 00:20:39.965 "raid_level": "raid5f", 00:20:39.965 "superblock": true, 00:20:39.965 "num_base_bdevs": 4, 00:20:39.965 "num_base_bdevs_discovered": 4, 00:20:39.965 "num_base_bdevs_operational": 4, 00:20:39.965 "base_bdevs_list": [ 00:20:39.965 { 00:20:39.965 "name": "spare", 00:20:39.965 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:39.965 "is_configured": true, 00:20:39.965 "data_offset": 2048, 00:20:39.965 "data_size": 63488 00:20:39.965 }, 00:20:39.965 { 00:20:39.965 "name": "BaseBdev2", 00:20:39.965 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:39.965 "is_configured": true, 00:20:39.965 "data_offset": 2048, 00:20:39.965 "data_size": 63488 00:20:39.965 }, 00:20:39.965 { 00:20:39.965 "name": "BaseBdev3", 00:20:39.965 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:39.965 "is_configured": true, 00:20:39.965 "data_offset": 2048, 00:20:39.965 
"data_size": 63488 00:20:39.965 }, 00:20:39.965 { 00:20:39.965 "name": "BaseBdev4", 00:20:39.965 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:39.965 "is_configured": true, 00:20:39.965 "data_offset": 2048, 00:20:39.965 "data_size": 63488 00:20:39.965 } 00:20:39.965 ] 00:20:39.965 }' 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.965 08:52:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.965 "name": "raid_bdev1", 00:20:39.965 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:39.965 "strip_size_kb": 64, 00:20:39.965 "state": "online", 00:20:39.965 "raid_level": "raid5f", 00:20:39.965 "superblock": true, 00:20:39.965 "num_base_bdevs": 4, 00:20:39.965 "num_base_bdevs_discovered": 4, 00:20:39.965 "num_base_bdevs_operational": 4, 00:20:39.965 "base_bdevs_list": [ 00:20:39.965 { 00:20:39.965 "name": "spare", 00:20:39.965 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:39.965 "is_configured": true, 00:20:39.965 "data_offset": 2048, 00:20:39.965 "data_size": 63488 00:20:39.965 }, 00:20:39.965 { 00:20:39.965 "name": "BaseBdev2", 00:20:39.965 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:39.965 "is_configured": true, 00:20:39.965 "data_offset": 2048, 00:20:39.965 "data_size": 63488 00:20:39.965 }, 00:20:39.965 { 00:20:39.965 "name": "BaseBdev3", 00:20:39.965 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:39.965 "is_configured": true, 00:20:39.965 "data_offset": 2048, 00:20:39.965 "data_size": 63488 00:20:39.965 }, 00:20:39.965 { 00:20:39.965 "name": "BaseBdev4", 00:20:39.965 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:39.965 "is_configured": true, 00:20:39.965 "data_offset": 2048, 00:20:39.965 "data_size": 63488 00:20:39.965 } 00:20:39.965 ] 00:20:39.965 }' 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:39.965 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.225 "name": "raid_bdev1", 00:20:40.225 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:40.225 "strip_size_kb": 64, 00:20:40.225 "state": "online", 00:20:40.225 "raid_level": "raid5f", 00:20:40.225 "superblock": true, 00:20:40.225 "num_base_bdevs": 4, 00:20:40.225 "num_base_bdevs_discovered": 4, 00:20:40.225 
"num_base_bdevs_operational": 4, 00:20:40.225 "base_bdevs_list": [ 00:20:40.225 { 00:20:40.225 "name": "spare", 00:20:40.225 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:40.225 "is_configured": true, 00:20:40.225 "data_offset": 2048, 00:20:40.225 "data_size": 63488 00:20:40.225 }, 00:20:40.225 { 00:20:40.225 "name": "BaseBdev2", 00:20:40.225 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:40.225 "is_configured": true, 00:20:40.225 "data_offset": 2048, 00:20:40.225 "data_size": 63488 00:20:40.225 }, 00:20:40.225 { 00:20:40.225 "name": "BaseBdev3", 00:20:40.225 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:40.225 "is_configured": true, 00:20:40.225 "data_offset": 2048, 00:20:40.225 "data_size": 63488 00:20:40.225 }, 00:20:40.225 { 00:20:40.225 "name": "BaseBdev4", 00:20:40.225 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:40.225 "is_configured": true, 00:20:40.225 "data_offset": 2048, 00:20:40.225 "data_size": 63488 00:20:40.225 } 00:20:40.225 ] 00:20:40.225 }' 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.225 08:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.792 [2024-11-27 08:52:37.285919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:40.792 [2024-11-27 08:52:37.286148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:40.792 [2024-11-27 08:52:37.286430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.792 [2024-11-27 08:52:37.286693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:20:40.792 [2024-11-27 08:52:37.286866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:40.792 08:52:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:40.792 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:41.051 /dev/nbd0 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:41.051 1+0 records in 00:20:41.051 1+0 records out 00:20:41.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308355 s, 13.3 MB/s 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # size=4096 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:41.051 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:41.310 /dev/nbd1 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local i 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # break 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:41.310 1+0 records in 00:20:41.310 1+0 records out 00:20:41.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381644 s, 10.7 MB/s 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # size=4096 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # return 0 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:41.310 08:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:41.569 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:41.569 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:41.569 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:41.569 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:41.569 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:41.569 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:41.569 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:41.835 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.100 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.360 [2024-11-27 08:52:38.870822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:42.360 [2024-11-27 08:52:38.870926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.360 [2024-11-27 08:52:38.870968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:42.360 [2024-11-27 08:52:38.870984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.360 [2024-11-27 08:52:38.874481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.360 [2024-11-27 08:52:38.874668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:42.360 [2024-11-27 08:52:38.874808] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:42.360 [2024-11-27 08:52:38.874895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:42.360 [2024-11-27 08:52:38.875144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:42.360 [2024-11-27 08:52:38.875284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:20:42.360 spare 00:20:42.360 [2024-11-27 08:52:38.875430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.360 [2024-11-27 08:52:38.975552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:42.360 [2024-11-27 08:52:38.975589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:42.360 [2024-11-27 08:52:38.975940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:20:42.360 [2024-11-27 08:52:38.982493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:42.360 [2024-11-27 08:52:38.982520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:42.360 [2024-11-27 08:52:38.982758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.360 08:52:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.360 08:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.360 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.360 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.360 "name": "raid_bdev1", 00:20:42.360 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:42.360 "strip_size_kb": 64, 00:20:42.360 "state": "online", 00:20:42.360 "raid_level": "raid5f", 00:20:42.360 "superblock": true, 00:20:42.360 "num_base_bdevs": 4, 00:20:42.360 "num_base_bdevs_discovered": 4, 00:20:42.360 "num_base_bdevs_operational": 4, 00:20:42.360 "base_bdevs_list": [ 00:20:42.360 { 00:20:42.360 "name": "spare", 00:20:42.360 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:42.360 "is_configured": true, 00:20:42.360 "data_offset": 2048, 00:20:42.360 "data_size": 63488 00:20:42.360 }, 00:20:42.360 { 00:20:42.360 "name": "BaseBdev2", 00:20:42.360 "uuid": 
"afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:42.360 "is_configured": true, 00:20:42.360 "data_offset": 2048, 00:20:42.360 "data_size": 63488 00:20:42.360 }, 00:20:42.360 { 00:20:42.360 "name": "BaseBdev3", 00:20:42.360 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:42.360 "is_configured": true, 00:20:42.360 "data_offset": 2048, 00:20:42.360 "data_size": 63488 00:20:42.360 }, 00:20:42.360 { 00:20:42.360 "name": "BaseBdev4", 00:20:42.360 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:42.360 "is_configured": true, 00:20:42.360 "data_offset": 2048, 00:20:42.360 "data_size": 63488 00:20:42.360 } 00:20:42.360 ] 00:20:42.360 }' 00:20:42.360 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.360 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.929 08:52:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.929 "name": "raid_bdev1", 00:20:42.929 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:42.929 "strip_size_kb": 64, 00:20:42.929 "state": "online", 00:20:42.929 "raid_level": "raid5f", 00:20:42.929 "superblock": true, 00:20:42.929 "num_base_bdevs": 4, 00:20:42.929 "num_base_bdevs_discovered": 4, 00:20:42.929 "num_base_bdevs_operational": 4, 00:20:42.929 "base_bdevs_list": [ 00:20:42.929 { 00:20:42.929 "name": "spare", 00:20:42.929 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:42.929 "is_configured": true, 00:20:42.929 "data_offset": 2048, 00:20:42.929 "data_size": 63488 00:20:42.929 }, 00:20:42.929 { 00:20:42.929 "name": "BaseBdev2", 00:20:42.929 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:42.929 "is_configured": true, 00:20:42.929 "data_offset": 2048, 00:20:42.929 "data_size": 63488 00:20:42.929 }, 00:20:42.929 { 00:20:42.929 "name": "BaseBdev3", 00:20:42.929 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:42.929 "is_configured": true, 00:20:42.929 "data_offset": 2048, 00:20:42.929 "data_size": 63488 00:20:42.929 }, 00:20:42.929 { 00:20:42.929 "name": "BaseBdev4", 00:20:42.929 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:42.929 "is_configured": true, 00:20:42.929 "data_offset": 2048, 00:20:42.929 "data_size": 63488 00:20:42.929 } 00:20:42.929 ] 00:20:42.929 }' 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:42.929 
08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.929 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.188 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.188 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:43.188 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.188 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.189 [2024-11-27 08:52:39.698938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.189 
08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.189 "name": "raid_bdev1", 00:20:43.189 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:43.189 "strip_size_kb": 64, 00:20:43.189 "state": "online", 00:20:43.189 "raid_level": "raid5f", 00:20:43.189 "superblock": true, 00:20:43.189 "num_base_bdevs": 4, 00:20:43.189 "num_base_bdevs_discovered": 3, 00:20:43.189 "num_base_bdevs_operational": 3, 00:20:43.189 "base_bdevs_list": [ 00:20:43.189 { 00:20:43.189 "name": null, 00:20:43.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.189 "is_configured": false, 00:20:43.189 "data_offset": 0, 00:20:43.189 "data_size": 63488 00:20:43.189 }, 00:20:43.189 { 00:20:43.189 "name": "BaseBdev2", 00:20:43.189 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:43.189 "is_configured": true, 00:20:43.189 "data_offset": 2048, 00:20:43.189 "data_size": 63488 00:20:43.189 }, 00:20:43.189 { 00:20:43.189 "name": "BaseBdev3", 00:20:43.189 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:43.189 "is_configured": true, 00:20:43.189 "data_offset": 2048, 00:20:43.189 "data_size": 63488 00:20:43.189 }, 00:20:43.189 { 00:20:43.189 "name": "BaseBdev4", 00:20:43.189 "uuid": 
"77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:43.189 "is_configured": true, 00:20:43.189 "data_offset": 2048, 00:20:43.189 "data_size": 63488 00:20:43.189 } 00:20:43.189 ] 00:20:43.189 }' 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.189 08:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.447 08:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:43.447 08:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.447 08:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.447 [2024-11-27 08:52:40.199145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.447 [2024-11-27 08:52:40.199610] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:43.447 [2024-11-27 08:52:40.199653] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:43.447 [2024-11-27 08:52:40.199706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.706 [2024-11-27 08:52:40.213499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:20:43.706 08:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.706 08:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:43.706 [2024-11-27 08:52:40.222643] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.641 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:44.641 "name": "raid_bdev1", 00:20:44.641 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:44.641 "strip_size_kb": 64, 00:20:44.641 "state": "online", 00:20:44.641 
"raid_level": "raid5f", 00:20:44.641 "superblock": true, 00:20:44.641 "num_base_bdevs": 4, 00:20:44.641 "num_base_bdevs_discovered": 4, 00:20:44.642 "num_base_bdevs_operational": 4, 00:20:44.642 "process": { 00:20:44.642 "type": "rebuild", 00:20:44.642 "target": "spare", 00:20:44.642 "progress": { 00:20:44.642 "blocks": 17280, 00:20:44.642 "percent": 9 00:20:44.642 } 00:20:44.642 }, 00:20:44.642 "base_bdevs_list": [ 00:20:44.642 { 00:20:44.642 "name": "spare", 00:20:44.642 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:44.642 "is_configured": true, 00:20:44.642 "data_offset": 2048, 00:20:44.642 "data_size": 63488 00:20:44.642 }, 00:20:44.642 { 00:20:44.642 "name": "BaseBdev2", 00:20:44.642 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:44.642 "is_configured": true, 00:20:44.642 "data_offset": 2048, 00:20:44.642 "data_size": 63488 00:20:44.642 }, 00:20:44.642 { 00:20:44.642 "name": "BaseBdev3", 00:20:44.642 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:44.642 "is_configured": true, 00:20:44.642 "data_offset": 2048, 00:20:44.642 "data_size": 63488 00:20:44.642 }, 00:20:44.642 { 00:20:44.642 "name": "BaseBdev4", 00:20:44.642 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:44.642 "is_configured": true, 00:20:44.642 "data_offset": 2048, 00:20:44.642 "data_size": 63488 00:20:44.642 } 00:20:44.642 ] 00:20:44.642 }' 00:20:44.642 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:44.642 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.642 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:44.642 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.642 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:44.642 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.642 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.642 [2024-11-27 08:52:41.383792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.900 [2024-11-27 08:52:41.434661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:44.900 [2024-11-27 08:52:41.434965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.900 [2024-11-27 08:52:41.435158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:44.900 [2024-11-27 08:52:41.435288] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.900 "name": "raid_bdev1", 00:20:44.900 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:44.900 "strip_size_kb": 64, 00:20:44.900 "state": "online", 00:20:44.900 "raid_level": "raid5f", 00:20:44.900 "superblock": true, 00:20:44.900 "num_base_bdevs": 4, 00:20:44.900 "num_base_bdevs_discovered": 3, 00:20:44.900 "num_base_bdevs_operational": 3, 00:20:44.900 "base_bdevs_list": [ 00:20:44.900 { 00:20:44.900 "name": null, 00:20:44.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.900 "is_configured": false, 00:20:44.900 "data_offset": 0, 00:20:44.900 "data_size": 63488 00:20:44.900 }, 00:20:44.900 { 00:20:44.900 "name": "BaseBdev2", 00:20:44.900 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:44.900 "is_configured": true, 00:20:44.900 "data_offset": 2048, 00:20:44.900 "data_size": 63488 00:20:44.900 }, 00:20:44.900 { 00:20:44.900 "name": "BaseBdev3", 00:20:44.900 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:44.900 "is_configured": true, 00:20:44.900 "data_offset": 2048, 00:20:44.900 "data_size": 63488 00:20:44.900 }, 00:20:44.900 { 00:20:44.900 "name": "BaseBdev4", 00:20:44.900 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:44.900 "is_configured": true, 00:20:44.900 "data_offset": 2048, 00:20:44.900 "data_size": 63488 00:20:44.900 } 00:20:44.900 ] 00:20:44.900 }' 
00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.900 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.467 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:45.467 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.467 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.467 [2024-11-27 08:52:41.968185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:45.467 [2024-11-27 08:52:41.968292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.467 [2024-11-27 08:52:41.968338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:45.467 [2024-11-27 08:52:41.968399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.467 [2024-11-27 08:52:41.969094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.467 [2024-11-27 08:52:41.969144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.467 [2024-11-27 08:52:41.969295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:45.467 [2024-11-27 08:52:41.969322] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:45.467 [2024-11-27 08:52:41.969337] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:45.467 [2024-11-27 08:52:41.969564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.467 [2024-11-27 08:52:41.983471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:20:45.467 spare 00:20:45.467 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.467 08:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:45.467 [2024-11-27 08:52:41.992256] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.403 08:52:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.403 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.404 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.404 "name": "raid_bdev1", 00:20:46.404 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:46.404 "strip_size_kb": 64, 00:20:46.404 "state": 
"online", 00:20:46.404 "raid_level": "raid5f", 00:20:46.404 "superblock": true, 00:20:46.404 "num_base_bdevs": 4, 00:20:46.404 "num_base_bdevs_discovered": 4, 00:20:46.404 "num_base_bdevs_operational": 4, 00:20:46.404 "process": { 00:20:46.404 "type": "rebuild", 00:20:46.404 "target": "spare", 00:20:46.404 "progress": { 00:20:46.404 "blocks": 17280, 00:20:46.404 "percent": 9 00:20:46.404 } 00:20:46.404 }, 00:20:46.404 "base_bdevs_list": [ 00:20:46.404 { 00:20:46.404 "name": "spare", 00:20:46.404 "uuid": "89c9eb94-1591-5891-8060-2220708671f0", 00:20:46.404 "is_configured": true, 00:20:46.404 "data_offset": 2048, 00:20:46.404 "data_size": 63488 00:20:46.404 }, 00:20:46.404 { 00:20:46.404 "name": "BaseBdev2", 00:20:46.404 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:46.404 "is_configured": true, 00:20:46.404 "data_offset": 2048, 00:20:46.404 "data_size": 63488 00:20:46.404 }, 00:20:46.404 { 00:20:46.404 "name": "BaseBdev3", 00:20:46.404 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:46.404 "is_configured": true, 00:20:46.404 "data_offset": 2048, 00:20:46.404 "data_size": 63488 00:20:46.404 }, 00:20:46.404 { 00:20:46.404 "name": "BaseBdev4", 00:20:46.404 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:46.404 "is_configured": true, 00:20:46.404 "data_offset": 2048, 00:20:46.404 "data_size": 63488 00:20:46.404 } 00:20:46.404 ] 00:20:46.404 }' 00:20:46.404 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.404 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:46.404 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.404 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.404 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:46.404 08:52:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.404 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.404 [2024-11-27 08:52:43.153737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.662 [2024-11-27 08:52:43.204652] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:46.662 [2024-11-27 08:52:43.204899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.662 [2024-11-27 08:52:43.204938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.662 [2024-11-27 08:52:43.204952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.662 08:52:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.662 "name": "raid_bdev1", 00:20:46.662 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:46.662 "strip_size_kb": 64, 00:20:46.662 "state": "online", 00:20:46.662 "raid_level": "raid5f", 00:20:46.662 "superblock": true, 00:20:46.662 "num_base_bdevs": 4, 00:20:46.662 "num_base_bdevs_discovered": 3, 00:20:46.662 "num_base_bdevs_operational": 3, 00:20:46.662 "base_bdevs_list": [ 00:20:46.662 { 00:20:46.662 "name": null, 00:20:46.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.662 "is_configured": false, 00:20:46.662 "data_offset": 0, 00:20:46.662 "data_size": 63488 00:20:46.662 }, 00:20:46.662 { 00:20:46.662 "name": "BaseBdev2", 00:20:46.662 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:46.662 "is_configured": true, 00:20:46.662 "data_offset": 2048, 00:20:46.662 "data_size": 63488 00:20:46.662 }, 00:20:46.662 { 00:20:46.662 "name": "BaseBdev3", 00:20:46.662 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:46.662 "is_configured": true, 00:20:46.662 "data_offset": 2048, 00:20:46.662 "data_size": 63488 00:20:46.662 }, 00:20:46.662 { 00:20:46.662 "name": "BaseBdev4", 00:20:46.662 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:46.662 "is_configured": true, 00:20:46.662 "data_offset": 2048, 00:20:46.662 
"data_size": 63488 00:20:46.662 } 00:20:46.662 ] 00:20:46.662 }' 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.662 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.230 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.230 "name": "raid_bdev1", 00:20:47.230 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:47.230 "strip_size_kb": 64, 00:20:47.230 "state": "online", 00:20:47.230 "raid_level": "raid5f", 00:20:47.230 "superblock": true, 00:20:47.230 "num_base_bdevs": 4, 00:20:47.230 "num_base_bdevs_discovered": 3, 00:20:47.230 "num_base_bdevs_operational": 3, 00:20:47.230 "base_bdevs_list": [ 00:20:47.230 { 00:20:47.230 "name": null, 00:20:47.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.230 
"is_configured": false, 00:20:47.230 "data_offset": 0, 00:20:47.230 "data_size": 63488 00:20:47.230 }, 00:20:47.230 { 00:20:47.231 "name": "BaseBdev2", 00:20:47.231 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:47.231 "is_configured": true, 00:20:47.231 "data_offset": 2048, 00:20:47.231 "data_size": 63488 00:20:47.231 }, 00:20:47.231 { 00:20:47.231 "name": "BaseBdev3", 00:20:47.231 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:47.231 "is_configured": true, 00:20:47.231 "data_offset": 2048, 00:20:47.231 "data_size": 63488 00:20:47.231 }, 00:20:47.231 { 00:20:47.231 "name": "BaseBdev4", 00:20:47.231 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:47.231 "is_configured": true, 00:20:47.231 "data_offset": 2048, 00:20:47.231 "data_size": 63488 00:20:47.231 } 00:20:47.231 ] 00:20:47.231 }' 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.231 08:52:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.231 [2024-11-27 08:52:43.901873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:47.231 [2024-11-27 08:52:43.901999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.231 [2024-11-27 08:52:43.902044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:47.231 [2024-11-27 08:52:43.902061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.231 [2024-11-27 08:52:43.902737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.231 [2024-11-27 08:52:43.902770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:47.231 [2024-11-27 08:52:43.902886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:47.231 [2024-11-27 08:52:43.902909] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:47.231 [2024-11-27 08:52:43.902925] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:47.231 [2024-11-27 08:52:43.902940] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:47.231 BaseBdev1 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.231 08:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.167 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.426 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.426 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.426 "name": "raid_bdev1", 00:20:48.426 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:48.426 "strip_size_kb": 64, 00:20:48.426 "state": "online", 00:20:48.426 "raid_level": "raid5f", 00:20:48.426 "superblock": true, 00:20:48.426 "num_base_bdevs": 4, 00:20:48.426 "num_base_bdevs_discovered": 3, 00:20:48.426 "num_base_bdevs_operational": 3, 00:20:48.426 "base_bdevs_list": [ 00:20:48.426 { 00:20:48.426 "name": null, 00:20:48.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.426 "is_configured": false, 00:20:48.426 
"data_offset": 0, 00:20:48.426 "data_size": 63488 00:20:48.426 }, 00:20:48.426 { 00:20:48.426 "name": "BaseBdev2", 00:20:48.426 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:48.426 "is_configured": true, 00:20:48.426 "data_offset": 2048, 00:20:48.426 "data_size": 63488 00:20:48.426 }, 00:20:48.426 { 00:20:48.426 "name": "BaseBdev3", 00:20:48.426 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:48.426 "is_configured": true, 00:20:48.426 "data_offset": 2048, 00:20:48.426 "data_size": 63488 00:20:48.426 }, 00:20:48.426 { 00:20:48.426 "name": "BaseBdev4", 00:20:48.426 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:48.426 "is_configured": true, 00:20:48.426 "data_offset": 2048, 00:20:48.426 "data_size": 63488 00:20:48.426 } 00:20:48.426 ] 00:20:48.426 }' 00:20:48.426 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.426 08:52:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:48.684 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.942 "name": "raid_bdev1", 00:20:48.942 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:48.942 "strip_size_kb": 64, 00:20:48.942 "state": "online", 00:20:48.942 "raid_level": "raid5f", 00:20:48.942 "superblock": true, 00:20:48.942 "num_base_bdevs": 4, 00:20:48.942 "num_base_bdevs_discovered": 3, 00:20:48.942 "num_base_bdevs_operational": 3, 00:20:48.942 "base_bdevs_list": [ 00:20:48.942 { 00:20:48.942 "name": null, 00:20:48.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.942 "is_configured": false, 00:20:48.942 "data_offset": 0, 00:20:48.942 "data_size": 63488 00:20:48.942 }, 00:20:48.942 { 00:20:48.942 "name": "BaseBdev2", 00:20:48.942 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:48.942 "is_configured": true, 00:20:48.942 "data_offset": 2048, 00:20:48.942 "data_size": 63488 00:20:48.942 }, 00:20:48.942 { 00:20:48.942 "name": "BaseBdev3", 00:20:48.942 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:48.942 "is_configured": true, 00:20:48.942 "data_offset": 2048, 00:20:48.942 "data_size": 63488 00:20:48.942 }, 00:20:48.942 { 00:20:48.942 "name": "BaseBdev4", 00:20:48.942 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:48.942 "is_configured": true, 00:20:48.942 "data_offset": 2048, 00:20:48.942 "data_size": 63488 00:20:48.942 } 00:20:48.942 ] 00:20:48.942 }' 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:48.942 
08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.942 [2024-11-27 08:52:45.562453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.942 [2024-11-27 08:52:45.562696] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:48.942 [2024-11-27 08:52:45.562726] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:48.942 request: 00:20:48.942 { 00:20:48.942 "base_bdev": "BaseBdev1", 00:20:48.942 "raid_bdev": "raid_bdev1", 00:20:48.942 "method": "bdev_raid_add_base_bdev", 00:20:48.942 "req_id": 1 00:20:48.942 } 00:20:48.942 Got JSON-RPC error response 00:20:48.942 response: 00:20:48.942 { 00:20:48.942 "code": -22, 00:20:48.942 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:20:48.942 } 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:48.942 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.943 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.943 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.943 08:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.876 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.134 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.135 "name": "raid_bdev1", 00:20:50.135 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:50.135 "strip_size_kb": 64, 00:20:50.135 "state": "online", 00:20:50.135 "raid_level": "raid5f", 00:20:50.135 "superblock": true, 00:20:50.135 "num_base_bdevs": 4, 00:20:50.135 "num_base_bdevs_discovered": 3, 00:20:50.135 "num_base_bdevs_operational": 3, 00:20:50.135 "base_bdevs_list": [ 00:20:50.135 { 00:20:50.135 "name": null, 00:20:50.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.135 "is_configured": false, 00:20:50.135 "data_offset": 0, 00:20:50.135 "data_size": 63488 00:20:50.135 }, 00:20:50.135 { 00:20:50.135 "name": "BaseBdev2", 00:20:50.135 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:50.135 "is_configured": true, 00:20:50.135 "data_offset": 2048, 00:20:50.135 "data_size": 63488 00:20:50.135 }, 00:20:50.135 { 00:20:50.135 "name": "BaseBdev3", 00:20:50.135 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:50.135 "is_configured": true, 00:20:50.135 "data_offset": 2048, 00:20:50.135 "data_size": 63488 00:20:50.135 }, 00:20:50.135 { 00:20:50.135 "name": "BaseBdev4", 00:20:50.135 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:50.135 "is_configured": true, 00:20:50.135 "data_offset": 2048, 00:20:50.135 "data_size": 63488 00:20:50.135 } 00:20:50.135 ] 00:20:50.135 }' 00:20:50.135 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.135 08:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.393 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.651 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.651 "name": "raid_bdev1", 00:20:50.651 "uuid": "1346ba62-010d-4366-a29b-80c36cfb6445", 00:20:50.651 "strip_size_kb": 64, 00:20:50.651 "state": "online", 00:20:50.651 "raid_level": "raid5f", 00:20:50.651 "superblock": true, 00:20:50.651 "num_base_bdevs": 4, 00:20:50.651 "num_base_bdevs_discovered": 3, 00:20:50.651 "num_base_bdevs_operational": 3, 00:20:50.651 "base_bdevs_list": [ 00:20:50.651 { 00:20:50.651 "name": null, 00:20:50.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.651 "is_configured": false, 00:20:50.651 "data_offset": 0, 00:20:50.651 "data_size": 63488 00:20:50.651 }, 00:20:50.651 { 00:20:50.651 "name": "BaseBdev2", 00:20:50.651 "uuid": "afc33184-30c9-5ffc-bb2a-135b3fa15e9c", 00:20:50.651 "is_configured": true, 
00:20:50.651 "data_offset": 2048, 00:20:50.651 "data_size": 63488 00:20:50.651 }, 00:20:50.651 { 00:20:50.651 "name": "BaseBdev3", 00:20:50.652 "uuid": "58cf3409-55f2-5d32-997b-591068e8729c", 00:20:50.652 "is_configured": true, 00:20:50.652 "data_offset": 2048, 00:20:50.652 "data_size": 63488 00:20:50.652 }, 00:20:50.652 { 00:20:50.652 "name": "BaseBdev4", 00:20:50.652 "uuid": "77df82d8-dc8b-5583-8666-b2b3046393ca", 00:20:50.652 "is_configured": true, 00:20:50.652 "data_offset": 2048, 00:20:50.652 "data_size": 63488 00:20:50.652 } 00:20:50.652 ] 00:20:50.652 }' 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85653 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' -z 85653 ']' 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # kill -0 85653 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # uname 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 85653 00:20:50.652 killing process with pid 85653 00:20:50.652 Received shutdown signal, test time was about 60.000000 seconds 00:20:50.652 00:20:50.652 Latency(us) 00:20:50.652 [2024-11-27T08:52:47.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.652 [2024-11-27T08:52:47.412Z] 
=================================================================================================================== 00:20:50.652 [2024-11-27T08:52:47.412Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # echo 'killing process with pid 85653' 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # kill 85653 00:20:50.652 [2024-11-27 08:52:47.294139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:50.652 08:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@975 -- # wait 85653 00:20:50.652 [2024-11-27 08:52:47.294323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.652 [2024-11-27 08:52:47.294469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.652 [2024-11-27 08:52:47.294494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:51.220 [2024-11-27 08:52:47.751984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:52.158 ************************************ 00:20:52.158 END TEST raid5f_rebuild_test_sb 00:20:52.158 ************************************ 00:20:52.158 08:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:52.158 00:20:52.158 real 0m28.812s 00:20:52.158 user 0m37.567s 00:20:52.158 sys 0m2.799s 00:20:52.158 08:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # xtrace_disable 00:20:52.158 08:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.158 08:52:48 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:52.158 08:52:48 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:52.158 08:52:48 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:20:52.158 08:52:48 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:52.158 08:52:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:52.158 ************************************ 00:20:52.158 START TEST raid_state_function_test_sb_4k 00:20:52.158 ************************************ 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 2 true 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:52.158 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:52.158 08:52:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:52.418 Process raid pid: 86478 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86478 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86478' 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86478 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # '[' -z 86478 ']' 00:20:52.418 08:52:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:52.418 08:52:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.418 [2024-11-27 08:52:49.074651] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:20:52.419 [2024-11-27 08:52:49.075048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.678 [2024-11-27 08:52:49.265284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.678 [2024-11-27 08:52:49.411167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.937 [2024-11-27 08:52:49.638160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.937 [2024-11-27 08:52:49.638225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@865 -- # return 0 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.504 [2024-11-27 08:52:50.045368] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:53.504 [2024-11-27 08:52:50.045432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:53.504 [2024-11-27 08:52:50.045451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:53.504 [2024-11-27 08:52:50.045480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.504 
08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.504 "name": "Existed_Raid", 00:20:53.504 "uuid": "1dbeaf7b-e98f-4c62-bddc-16cf9facd690", 00:20:53.504 "strip_size_kb": 0, 00:20:53.504 "state": "configuring", 00:20:53.504 "raid_level": "raid1", 00:20:53.504 "superblock": true, 00:20:53.504 "num_base_bdevs": 2, 00:20:53.504 "num_base_bdevs_discovered": 0, 00:20:53.504 "num_base_bdevs_operational": 2, 00:20:53.504 "base_bdevs_list": [ 00:20:53.504 { 00:20:53.504 "name": "BaseBdev1", 00:20:53.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.504 "is_configured": false, 00:20:53.504 "data_offset": 0, 00:20:53.504 "data_size": 0 00:20:53.504 }, 00:20:53.504 { 00:20:53.504 "name": "BaseBdev2", 00:20:53.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.504 "is_configured": false, 00:20:53.504 "data_offset": 0, 00:20:53.504 "data_size": 0 00:20:53.504 } 00:20:53.504 ] 00:20:53.504 }' 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.504 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.070 [2024-11-27 08:52:50.569469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:54.070 [2024-11-27 08:52:50.569663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.070 [2024-11-27 08:52:50.577423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:54.070 [2024-11-27 08:52:50.577478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:54.070 [2024-11-27 08:52:50.577495] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:54.070 [2024-11-27 08:52:50.577515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.070 08:52:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.070 [2024-11-27 08:52:50.625861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.070 BaseBdev1 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local i 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.070 [ 00:20:54.070 { 00:20:54.070 "name": "BaseBdev1", 00:20:54.070 "aliases": [ 00:20:54.070 
"3ed28639-f507-4ae3-bc8a-4ef1f1966e1b" 00:20:54.070 ], 00:20:54.070 "product_name": "Malloc disk", 00:20:54.070 "block_size": 4096, 00:20:54.070 "num_blocks": 8192, 00:20:54.070 "uuid": "3ed28639-f507-4ae3-bc8a-4ef1f1966e1b", 00:20:54.070 "assigned_rate_limits": { 00:20:54.070 "rw_ios_per_sec": 0, 00:20:54.070 "rw_mbytes_per_sec": 0, 00:20:54.070 "r_mbytes_per_sec": 0, 00:20:54.070 "w_mbytes_per_sec": 0 00:20:54.070 }, 00:20:54.070 "claimed": true, 00:20:54.070 "claim_type": "exclusive_write", 00:20:54.070 "zoned": false, 00:20:54.070 "supported_io_types": { 00:20:54.070 "read": true, 00:20:54.070 "write": true, 00:20:54.070 "unmap": true, 00:20:54.070 "flush": true, 00:20:54.070 "reset": true, 00:20:54.070 "nvme_admin": false, 00:20:54.070 "nvme_io": false, 00:20:54.070 "nvme_io_md": false, 00:20:54.070 "write_zeroes": true, 00:20:54.070 "zcopy": true, 00:20:54.070 "get_zone_info": false, 00:20:54.070 "zone_management": false, 00:20:54.070 "zone_append": false, 00:20:54.070 "compare": false, 00:20:54.070 "compare_and_write": false, 00:20:54.070 "abort": true, 00:20:54.070 "seek_hole": false, 00:20:54.070 "seek_data": false, 00:20:54.070 "copy": true, 00:20:54.070 "nvme_iov_md": false 00:20:54.070 }, 00:20:54.070 "memory_domains": [ 00:20:54.070 { 00:20:54.070 "dma_device_id": "system", 00:20:54.070 "dma_device_type": 1 00:20:54.070 }, 00:20:54.070 { 00:20:54.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.070 "dma_device_type": 2 00:20:54.070 } 00:20:54.070 ], 00:20:54.070 "driver_specific": {} 00:20:54.070 } 00:20:54.070 ] 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # return 0 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.070 "name": "Existed_Raid", 00:20:54.070 "uuid": "a9d077c9-9e40-4e32-aec0-b7398769ee1d", 00:20:54.070 "strip_size_kb": 0, 00:20:54.070 "state": "configuring", 00:20:54.070 "raid_level": "raid1", 00:20:54.070 "superblock": true, 00:20:54.070 "num_base_bdevs": 2, 00:20:54.070 
"num_base_bdevs_discovered": 1, 00:20:54.070 "num_base_bdevs_operational": 2, 00:20:54.070 "base_bdevs_list": [ 00:20:54.070 { 00:20:54.070 "name": "BaseBdev1", 00:20:54.070 "uuid": "3ed28639-f507-4ae3-bc8a-4ef1f1966e1b", 00:20:54.070 "is_configured": true, 00:20:54.070 "data_offset": 256, 00:20:54.070 "data_size": 7936 00:20:54.070 }, 00:20:54.070 { 00:20:54.070 "name": "BaseBdev2", 00:20:54.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.070 "is_configured": false, 00:20:54.070 "data_offset": 0, 00:20:54.070 "data_size": 0 00:20:54.070 } 00:20:54.070 ] 00:20:54.070 }' 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.070 08:52:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.639 [2024-11-27 08:52:51.162063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:54.639 [2024-11-27 08:52:51.162135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.639 [2024-11-27 08:52:51.170086] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.639 [2024-11-27 08:52:51.172683] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:54.639 [2024-11-27 08:52:51.172740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.639 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.639 "name": "Existed_Raid", 00:20:54.639 "uuid": "b421dce3-c0b0-4115-9017-e660198063fd", 00:20:54.639 "strip_size_kb": 0, 00:20:54.639 "state": "configuring", 00:20:54.639 "raid_level": "raid1", 00:20:54.639 "superblock": true, 00:20:54.639 "num_base_bdevs": 2, 00:20:54.639 "num_base_bdevs_discovered": 1, 00:20:54.639 "num_base_bdevs_operational": 2, 00:20:54.639 "base_bdevs_list": [ 00:20:54.639 { 00:20:54.639 "name": "BaseBdev1", 00:20:54.639 "uuid": "3ed28639-f507-4ae3-bc8a-4ef1f1966e1b", 00:20:54.639 "is_configured": true, 00:20:54.639 "data_offset": 256, 00:20:54.639 "data_size": 7936 00:20:54.639 }, 00:20:54.639 { 00:20:54.639 "name": "BaseBdev2", 00:20:54.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.640 "is_configured": false, 00:20:54.640 "data_offset": 0, 00:20:54.640 "data_size": 0 00:20:54.640 } 00:20:54.640 ] 00:20:54.640 }' 00:20:54.640 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.640 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.207 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:55.207 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.207 08:52:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.207 [2024-11-27 08:52:51.716104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.207 [2024-11-27 08:52:51.716476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:55.207 [2024-11-27 08:52:51.716497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:55.207 BaseBdev2 00:20:55.207 [2024-11-27 08:52:51.716839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:55.207 [2024-11-27 08:52:51.717056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:55.207 [2024-11-27 08:52:51.717078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:55.207 [2024-11-27 08:52:51.717269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.207 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.207 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:55.207 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:20:55.207 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:20:55.207 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local i 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:20:55.208 08:52:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.208 [ 00:20:55.208 { 00:20:55.208 "name": "BaseBdev2", 00:20:55.208 "aliases": [ 00:20:55.208 "9adadf84-97df-49cc-aeb8-96d0b63e2e06" 00:20:55.208 ], 00:20:55.208 "product_name": "Malloc disk", 00:20:55.208 "block_size": 4096, 00:20:55.208 "num_blocks": 8192, 00:20:55.208 "uuid": "9adadf84-97df-49cc-aeb8-96d0b63e2e06", 00:20:55.208 "assigned_rate_limits": { 00:20:55.208 "rw_ios_per_sec": 0, 00:20:55.208 "rw_mbytes_per_sec": 0, 00:20:55.208 "r_mbytes_per_sec": 0, 00:20:55.208 "w_mbytes_per_sec": 0 00:20:55.208 }, 00:20:55.208 "claimed": true, 00:20:55.208 "claim_type": "exclusive_write", 00:20:55.208 "zoned": false, 00:20:55.208 "supported_io_types": { 00:20:55.208 "read": true, 00:20:55.208 "write": true, 00:20:55.208 "unmap": true, 00:20:55.208 "flush": true, 00:20:55.208 "reset": true, 00:20:55.208 "nvme_admin": false, 00:20:55.208 "nvme_io": false, 00:20:55.208 "nvme_io_md": false, 00:20:55.208 "write_zeroes": true, 00:20:55.208 "zcopy": true, 00:20:55.208 "get_zone_info": false, 00:20:55.208 "zone_management": false, 00:20:55.208 "zone_append": false, 00:20:55.208 "compare": false, 00:20:55.208 "compare_and_write": false, 00:20:55.208 "abort": true, 00:20:55.208 "seek_hole": false, 00:20:55.208 "seek_data": false, 00:20:55.208 "copy": true, 00:20:55.208 "nvme_iov_md": false 
00:20:55.208 }, 00:20:55.208 "memory_domains": [ 00:20:55.208 { 00:20:55.208 "dma_device_id": "system", 00:20:55.208 "dma_device_type": 1 00:20:55.208 }, 00:20:55.208 { 00:20:55.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.208 "dma_device_type": 2 00:20:55.208 } 00:20:55.208 ], 00:20:55.208 "driver_specific": {} 00:20:55.208 } 00:20:55.208 ] 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # return 0 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.208 "name": "Existed_Raid", 00:20:55.208 "uuid": "b421dce3-c0b0-4115-9017-e660198063fd", 00:20:55.208 "strip_size_kb": 0, 00:20:55.208 "state": "online", 00:20:55.208 "raid_level": "raid1", 00:20:55.208 "superblock": true, 00:20:55.208 "num_base_bdevs": 2, 00:20:55.208 "num_base_bdevs_discovered": 2, 00:20:55.208 "num_base_bdevs_operational": 2, 00:20:55.208 "base_bdevs_list": [ 00:20:55.208 { 00:20:55.208 "name": "BaseBdev1", 00:20:55.208 "uuid": "3ed28639-f507-4ae3-bc8a-4ef1f1966e1b", 00:20:55.208 "is_configured": true, 00:20:55.208 "data_offset": 256, 00:20:55.208 "data_size": 7936 00:20:55.208 }, 00:20:55.208 { 00:20:55.208 "name": "BaseBdev2", 00:20:55.208 "uuid": "9adadf84-97df-49cc-aeb8-96d0b63e2e06", 00:20:55.208 "is_configured": true, 00:20:55.208 "data_offset": 256, 00:20:55.208 "data_size": 7936 00:20:55.208 } 00:20:55.208 ] 00:20:55.208 }' 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.208 08:52:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:55.778 08:52:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.778 [2024-11-27 08:52:52.264739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.778 "name": "Existed_Raid", 00:20:55.778 "aliases": [ 00:20:55.778 "b421dce3-c0b0-4115-9017-e660198063fd" 00:20:55.778 ], 00:20:55.778 "product_name": "Raid Volume", 00:20:55.778 "block_size": 4096, 00:20:55.778 "num_blocks": 7936, 00:20:55.778 "uuid": "b421dce3-c0b0-4115-9017-e660198063fd", 00:20:55.778 "assigned_rate_limits": { 00:20:55.778 "rw_ios_per_sec": 0, 00:20:55.778 "rw_mbytes_per_sec": 0, 00:20:55.778 "r_mbytes_per_sec": 0, 00:20:55.778 "w_mbytes_per_sec": 0 00:20:55.778 }, 00:20:55.778 "claimed": false, 00:20:55.778 "zoned": false, 00:20:55.778 "supported_io_types": { 00:20:55.778 "read": true, 
00:20:55.778 "write": true, 00:20:55.778 "unmap": false, 00:20:55.778 "flush": false, 00:20:55.778 "reset": true, 00:20:55.778 "nvme_admin": false, 00:20:55.778 "nvme_io": false, 00:20:55.778 "nvme_io_md": false, 00:20:55.778 "write_zeroes": true, 00:20:55.778 "zcopy": false, 00:20:55.778 "get_zone_info": false, 00:20:55.778 "zone_management": false, 00:20:55.778 "zone_append": false, 00:20:55.778 "compare": false, 00:20:55.778 "compare_and_write": false, 00:20:55.778 "abort": false, 00:20:55.778 "seek_hole": false, 00:20:55.778 "seek_data": false, 00:20:55.778 "copy": false, 00:20:55.778 "nvme_iov_md": false 00:20:55.778 }, 00:20:55.778 "memory_domains": [ 00:20:55.778 { 00:20:55.778 "dma_device_id": "system", 00:20:55.778 "dma_device_type": 1 00:20:55.778 }, 00:20:55.778 { 00:20:55.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.778 "dma_device_type": 2 00:20:55.778 }, 00:20:55.778 { 00:20:55.778 "dma_device_id": "system", 00:20:55.778 "dma_device_type": 1 00:20:55.778 }, 00:20:55.778 { 00:20:55.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.778 "dma_device_type": 2 00:20:55.778 } 00:20:55.778 ], 00:20:55.778 "driver_specific": { 00:20:55.778 "raid": { 00:20:55.778 "uuid": "b421dce3-c0b0-4115-9017-e660198063fd", 00:20:55.778 "strip_size_kb": 0, 00:20:55.778 "state": "online", 00:20:55.778 "raid_level": "raid1", 00:20:55.778 "superblock": true, 00:20:55.778 "num_base_bdevs": 2, 00:20:55.778 "num_base_bdevs_discovered": 2, 00:20:55.778 "num_base_bdevs_operational": 2, 00:20:55.778 "base_bdevs_list": [ 00:20:55.778 { 00:20:55.778 "name": "BaseBdev1", 00:20:55.778 "uuid": "3ed28639-f507-4ae3-bc8a-4ef1f1966e1b", 00:20:55.778 "is_configured": true, 00:20:55.778 "data_offset": 256, 00:20:55.778 "data_size": 7936 00:20:55.778 }, 00:20:55.778 { 00:20:55.778 "name": "BaseBdev2", 00:20:55.778 "uuid": "9adadf84-97df-49cc-aeb8-96d0b63e2e06", 00:20:55.778 "is_configured": true, 00:20:55.778 "data_offset": 256, 00:20:55.778 "data_size": 7936 00:20:55.778 } 
00:20:55.778 ] 00:20:55.778 } 00:20:55.778 } 00:20:55.778 }' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:55.778 BaseBdev2' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.778 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.778 [2024-11-27 08:52:52.532445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.037 "name": "Existed_Raid", 00:20:56.037 "uuid": "b421dce3-c0b0-4115-9017-e660198063fd", 00:20:56.037 "strip_size_kb": 0, 00:20:56.037 "state": "online", 00:20:56.037 "raid_level": "raid1", 00:20:56.037 "superblock": true, 00:20:56.037 "num_base_bdevs": 2, 00:20:56.037 
"num_base_bdevs_discovered": 1, 00:20:56.037 "num_base_bdevs_operational": 1, 00:20:56.037 "base_bdevs_list": [ 00:20:56.037 { 00:20:56.037 "name": null, 00:20:56.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.037 "is_configured": false, 00:20:56.037 "data_offset": 0, 00:20:56.037 "data_size": 7936 00:20:56.037 }, 00:20:56.037 { 00:20:56.037 "name": "BaseBdev2", 00:20:56.037 "uuid": "9adadf84-97df-49cc-aeb8-96d0b63e2e06", 00:20:56.037 "is_configured": true, 00:20:56.037 "data_offset": 256, 00:20:56.037 "data_size": 7936 00:20:56.037 } 00:20:56.037 ] 00:20:56.037 }' 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.037 08:52:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:56.647 08:52:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.647 [2024-11-27 08:52:53.209222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:56.647 [2024-11-27 08:52:53.209529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.647 [2024-11-27 08:52:53.301506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.647 [2024-11-27 08:52:53.301596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.647 [2024-11-27 08:52:53.301618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.647 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86478 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' -z 86478 ']' 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # kill -0 86478 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # uname 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 86478 00:20:56.648 killing process with pid 86478 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # echo 'killing process with pid 86478' 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # kill 86478 00:20:56.648 [2024-11-27 08:52:53.392197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.648 08:52:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@975 -- # wait 86478 00:20:56.906 [2024-11-27 08:52:53.407494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.840 08:52:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:57.840 00:20:57.840 real 0m5.576s 00:20:57.840 user 0m8.345s 00:20:57.840 sys 0m0.863s 00:20:57.840 08:52:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # xtrace_disable 
00:20:57.840 08:52:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.840 ************************************ 00:20:57.840 END TEST raid_state_function_test_sb_4k 00:20:57.840 ************************************ 00:20:57.840 08:52:54 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:57.840 08:52:54 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:20:57.840 08:52:54 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:57.841 08:52:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:57.841 ************************************ 00:20:57.841 START TEST raid_superblock_test_4k 00:20:57.841 ************************************ 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # raid_superblock_test raid1 2 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86731 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86731 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # '[' -z 86731 ']' 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:57.841 08:52:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.099 [2024-11-27 08:52:54.665620] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:20:58.099 [2024-11-27 08:52:54.666013] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86731 ] 00:20:58.099 [2024-11-27 08:52:54.845358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.358 [2024-11-27 08:52:54.990202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.616 [2024-11-27 08:52:55.212262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:58.616 [2024-11-27 08:52:55.212511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@865 -- # return 0 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.182 malloc1 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.182 [2024-11-27 08:52:55.706310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:59.182 [2024-11-27 08:52:55.706417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.182 [2024-11-27 08:52:55.706457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:59.182 [2024-11-27 08:52:55.706472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.182 [2024-11-27 08:52:55.709498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.182 [2024-11-27 08:52:55.709544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:59.182 pt1 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:59.182 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.183 malloc2 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.183 [2024-11-27 08:52:55.766036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:59.183 [2024-11-27 08:52:55.766114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.183 [2024-11-27 08:52:55.766148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:59.183 [2024-11-27 08:52:55.766163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.183 [2024-11-27 08:52:55.769179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.183 [2024-11-27 
08:52:55.769361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:59.183 pt2 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.183 [2024-11-27 08:52:55.774212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:59.183 [2024-11-27 08:52:55.776869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:59.183 [2024-11-27 08:52:55.777236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:59.183 [2024-11-27 08:52:55.777267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:59.183 [2024-11-27 08:52:55.777640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:59.183 [2024-11-27 08:52:55.777859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:59.183 [2024-11-27 08:52:55.777886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:59.183 [2024-11-27 08:52:55.778150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.183 "name": "raid_bdev1", 00:20:59.183 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:20:59.183 "strip_size_kb": 0, 00:20:59.183 "state": "online", 00:20:59.183 "raid_level": "raid1", 00:20:59.183 "superblock": true, 00:20:59.183 "num_base_bdevs": 2, 00:20:59.183 
"num_base_bdevs_discovered": 2, 00:20:59.183 "num_base_bdevs_operational": 2, 00:20:59.183 "base_bdevs_list": [ 00:20:59.183 { 00:20:59.183 "name": "pt1", 00:20:59.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:59.183 "is_configured": true, 00:20:59.183 "data_offset": 256, 00:20:59.183 "data_size": 7936 00:20:59.183 }, 00:20:59.183 { 00:20:59.183 "name": "pt2", 00:20:59.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:59.183 "is_configured": true, 00:20:59.183 "data_offset": 256, 00:20:59.183 "data_size": 7936 00:20:59.183 } 00:20:59.183 ] 00:20:59.183 }' 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.183 08:52:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.750 [2024-11-27 08:52:56.342779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:59.750 "name": "raid_bdev1", 00:20:59.750 "aliases": [ 00:20:59.750 "7524c3f7-1e53-4d6f-93d5-2348a4b28546" 00:20:59.750 ], 00:20:59.750 "product_name": "Raid Volume", 00:20:59.750 "block_size": 4096, 00:20:59.750 "num_blocks": 7936, 00:20:59.750 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:20:59.750 "assigned_rate_limits": { 00:20:59.750 "rw_ios_per_sec": 0, 00:20:59.750 "rw_mbytes_per_sec": 0, 00:20:59.750 "r_mbytes_per_sec": 0, 00:20:59.750 "w_mbytes_per_sec": 0 00:20:59.750 }, 00:20:59.750 "claimed": false, 00:20:59.750 "zoned": false, 00:20:59.750 "supported_io_types": { 00:20:59.750 "read": true, 00:20:59.750 "write": true, 00:20:59.750 "unmap": false, 00:20:59.750 "flush": false, 00:20:59.750 "reset": true, 00:20:59.750 "nvme_admin": false, 00:20:59.750 "nvme_io": false, 00:20:59.750 "nvme_io_md": false, 00:20:59.750 "write_zeroes": true, 00:20:59.750 "zcopy": false, 00:20:59.750 "get_zone_info": false, 00:20:59.750 "zone_management": false, 00:20:59.750 "zone_append": false, 00:20:59.750 "compare": false, 00:20:59.750 "compare_and_write": false, 00:20:59.750 "abort": false, 00:20:59.750 "seek_hole": false, 00:20:59.750 "seek_data": false, 00:20:59.750 "copy": false, 00:20:59.750 "nvme_iov_md": false 00:20:59.750 }, 00:20:59.750 "memory_domains": [ 00:20:59.750 { 00:20:59.750 "dma_device_id": "system", 00:20:59.750 "dma_device_type": 1 00:20:59.750 }, 00:20:59.750 { 00:20:59.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.750 "dma_device_type": 2 00:20:59.750 }, 00:20:59.750 { 00:20:59.750 "dma_device_id": "system", 00:20:59.750 "dma_device_type": 1 00:20:59.750 }, 00:20:59.750 { 00:20:59.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.750 "dma_device_type": 2 00:20:59.750 } 00:20:59.750 ], 
00:20:59.750 "driver_specific": { 00:20:59.750 "raid": { 00:20:59.750 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:20:59.750 "strip_size_kb": 0, 00:20:59.750 "state": "online", 00:20:59.750 "raid_level": "raid1", 00:20:59.750 "superblock": true, 00:20:59.750 "num_base_bdevs": 2, 00:20:59.750 "num_base_bdevs_discovered": 2, 00:20:59.750 "num_base_bdevs_operational": 2, 00:20:59.750 "base_bdevs_list": [ 00:20:59.750 { 00:20:59.750 "name": "pt1", 00:20:59.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:59.750 "is_configured": true, 00:20:59.750 "data_offset": 256, 00:20:59.750 "data_size": 7936 00:20:59.750 }, 00:20:59.750 { 00:20:59.750 "name": "pt2", 00:20:59.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:59.750 "is_configured": true, 00:20:59.750 "data_offset": 256, 00:20:59.750 "data_size": 7936 00:20:59.750 } 00:20:59.750 ] 00:20:59.750 } 00:20:59.750 } 00:20:59.750 }' 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:59.750 pt2' 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.750 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.009 [2024-11-27 08:52:56.618987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7524c3f7-1e53-4d6f-93d5-2348a4b28546 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 7524c3f7-1e53-4d6f-93d5-2348a4b28546 ']' 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.009 [2024-11-27 08:52:56.670439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:00.009 [2024-11-27 08:52:56.670475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.009 [2024-11-27 08:52:56.670594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.009 [2024-11-27 08:52:56.670682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.009 [2024-11-27 08:52:56.670706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:00.009 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.010 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.269 [2024-11-27 08:52:56.818537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:00.269 [2024-11-27 08:52:56.821214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:00.269 [2024-11-27 08:52:56.821306] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:00.269 [2024-11-27 08:52:56.821405] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:00.269 [2024-11-27 08:52:56.821440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:00.269 [2024-11-27 08:52:56.821457] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:00.269 request: 00:21:00.269 { 00:21:00.269 "name": "raid_bdev1", 00:21:00.269 "raid_level": "raid1", 00:21:00.269 "base_bdevs": [ 00:21:00.269 "malloc1", 00:21:00.269 "malloc2" 00:21:00.269 ], 00:21:00.269 "superblock": false, 00:21:00.269 "method": "bdev_raid_create", 00:21:00.269 "req_id": 1 00:21:00.269 } 00:21:00.269 Got JSON-RPC error response 00:21:00.269 response: 00:21:00.269 { 00:21:00.269 "code": -17, 00:21:00.269 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:00.269 } 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.269 [2024-11-27 08:52:56.886517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:00.269 [2024-11-27 08:52:56.886628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.269 [2024-11-27 08:52:56.886659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:00.269 [2024-11-27 08:52:56.886678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.269 [2024-11-27 08:52:56.889842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.269 [2024-11-27 08:52:56.889893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:00.269 [2024-11-27 08:52:56.890016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:00.269 [2024-11-27 08:52:56.890105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:00.269 pt1 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.269 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.269 "name": "raid_bdev1", 00:21:00.269 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:21:00.269 "strip_size_kb": 0, 00:21:00.269 "state": "configuring", 00:21:00.269 "raid_level": "raid1", 00:21:00.269 "superblock": true, 00:21:00.269 "num_base_bdevs": 2, 00:21:00.269 "num_base_bdevs_discovered": 1, 00:21:00.269 "num_base_bdevs_operational": 2, 00:21:00.269 "base_bdevs_list": [ 00:21:00.269 { 00:21:00.269 "name": "pt1", 00:21:00.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:00.269 "is_configured": true, 00:21:00.269 "data_offset": 256, 00:21:00.269 "data_size": 7936 00:21:00.269 }, 00:21:00.269 { 00:21:00.269 "name": null, 00:21:00.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:00.269 "is_configured": false, 00:21:00.269 "data_offset": 256, 00:21:00.269 "data_size": 7936 00:21:00.269 } 
00:21:00.269 ] 00:21:00.269 }' 00:21:00.270 08:52:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.270 08:52:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.836 [2024-11-27 08:52:57.414700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:00.836 [2024-11-27 08:52:57.414800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.836 [2024-11-27 08:52:57.414837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:00.836 [2024-11-27 08:52:57.414857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.836 [2024-11-27 08:52:57.415547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.836 [2024-11-27 08:52:57.415589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:00.836 [2024-11-27 08:52:57.415708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:00.836 [2024-11-27 08:52:57.415759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:00.836 [2024-11-27 08:52:57.415920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:21:00.836 [2024-11-27 08:52:57.415942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:00.836 [2024-11-27 08:52:57.416258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:00.836 [2024-11-27 08:52:57.416488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:00.836 [2024-11-27 08:52:57.416506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:00.836 [2024-11-27 08:52:57.416687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.836 pt2 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:00.836 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.837 "name": "raid_bdev1", 00:21:00.837 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:21:00.837 "strip_size_kb": 0, 00:21:00.837 "state": "online", 00:21:00.837 "raid_level": "raid1", 00:21:00.837 "superblock": true, 00:21:00.837 "num_base_bdevs": 2, 00:21:00.837 "num_base_bdevs_discovered": 2, 00:21:00.837 "num_base_bdevs_operational": 2, 00:21:00.837 "base_bdevs_list": [ 00:21:00.837 { 00:21:00.837 "name": "pt1", 00:21:00.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:00.837 "is_configured": true, 00:21:00.837 "data_offset": 256, 00:21:00.837 "data_size": 7936 00:21:00.837 }, 00:21:00.837 { 00:21:00.837 "name": "pt2", 00:21:00.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:00.837 "is_configured": true, 00:21:00.837 "data_offset": 256, 00:21:00.837 "data_size": 7936 00:21:00.837 } 00:21:00.837 ] 00:21:00.837 }' 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.837 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.403 [2024-11-27 08:52:57.983161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.403 08:52:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:01.403 "name": "raid_bdev1", 00:21:01.403 "aliases": [ 00:21:01.403 "7524c3f7-1e53-4d6f-93d5-2348a4b28546" 00:21:01.403 ], 00:21:01.403 "product_name": "Raid Volume", 00:21:01.403 "block_size": 4096, 00:21:01.403 "num_blocks": 7936, 00:21:01.403 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:21:01.403 "assigned_rate_limits": { 00:21:01.403 "rw_ios_per_sec": 0, 00:21:01.403 "rw_mbytes_per_sec": 0, 00:21:01.403 "r_mbytes_per_sec": 0, 00:21:01.403 "w_mbytes_per_sec": 0 00:21:01.403 }, 00:21:01.403 "claimed": false, 00:21:01.403 "zoned": false, 00:21:01.403 "supported_io_types": { 00:21:01.403 "read": true, 00:21:01.403 "write": true, 00:21:01.403 "unmap": false, 
00:21:01.403 "flush": false, 00:21:01.403 "reset": true, 00:21:01.403 "nvme_admin": false, 00:21:01.403 "nvme_io": false, 00:21:01.403 "nvme_io_md": false, 00:21:01.403 "write_zeroes": true, 00:21:01.403 "zcopy": false, 00:21:01.403 "get_zone_info": false, 00:21:01.403 "zone_management": false, 00:21:01.403 "zone_append": false, 00:21:01.403 "compare": false, 00:21:01.403 "compare_and_write": false, 00:21:01.403 "abort": false, 00:21:01.403 "seek_hole": false, 00:21:01.403 "seek_data": false, 00:21:01.403 "copy": false, 00:21:01.403 "nvme_iov_md": false 00:21:01.403 }, 00:21:01.403 "memory_domains": [ 00:21:01.403 { 00:21:01.403 "dma_device_id": "system", 00:21:01.403 "dma_device_type": 1 00:21:01.403 }, 00:21:01.403 { 00:21:01.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.403 "dma_device_type": 2 00:21:01.403 }, 00:21:01.403 { 00:21:01.403 "dma_device_id": "system", 00:21:01.403 "dma_device_type": 1 00:21:01.403 }, 00:21:01.403 { 00:21:01.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.403 "dma_device_type": 2 00:21:01.403 } 00:21:01.403 ], 00:21:01.403 "driver_specific": { 00:21:01.403 "raid": { 00:21:01.403 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:21:01.403 "strip_size_kb": 0, 00:21:01.403 "state": "online", 00:21:01.403 "raid_level": "raid1", 00:21:01.403 "superblock": true, 00:21:01.403 "num_base_bdevs": 2, 00:21:01.403 "num_base_bdevs_discovered": 2, 00:21:01.403 "num_base_bdevs_operational": 2, 00:21:01.403 "base_bdevs_list": [ 00:21:01.403 { 00:21:01.403 "name": "pt1", 00:21:01.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:01.403 "is_configured": true, 00:21:01.403 "data_offset": 256, 00:21:01.403 "data_size": 7936 00:21:01.403 }, 00:21:01.403 { 00:21:01.403 "name": "pt2", 00:21:01.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.403 "is_configured": true, 00:21:01.403 "data_offset": 256, 00:21:01.403 "data_size": 7936 00:21:01.403 } 00:21:01.403 ] 00:21:01.403 } 00:21:01.403 } 00:21:01.403 }' 00:21:01.403 
08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:01.403 pt2' 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.403 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.662 
08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.662 [2024-11-27 08:52:58.247170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 7524c3f7-1e53-4d6f-93d5-2348a4b28546 '!=' 7524c3f7-1e53-4d6f-93d5-2348a4b28546 ']' 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.662 [2024-11-27 08:52:58.290896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:01.662 
08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.662 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.662 "name": "raid_bdev1", 00:21:01.662 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 
00:21:01.662 "strip_size_kb": 0, 00:21:01.662 "state": "online", 00:21:01.662 "raid_level": "raid1", 00:21:01.662 "superblock": true, 00:21:01.662 "num_base_bdevs": 2, 00:21:01.662 "num_base_bdevs_discovered": 1, 00:21:01.662 "num_base_bdevs_operational": 1, 00:21:01.662 "base_bdevs_list": [ 00:21:01.662 { 00:21:01.662 "name": null, 00:21:01.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.663 "is_configured": false, 00:21:01.663 "data_offset": 0, 00:21:01.663 "data_size": 7936 00:21:01.663 }, 00:21:01.663 { 00:21:01.663 "name": "pt2", 00:21:01.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.663 "is_configured": true, 00:21:01.663 "data_offset": 256, 00:21:01.663 "data_size": 7936 00:21:01.663 } 00:21:01.663 ] 00:21:01.663 }' 00:21:01.663 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.663 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.228 [2024-11-27 08:52:58.815030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.228 [2024-11-27 08:52:58.815069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.228 [2024-11-27 08:52:58.815184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.228 [2024-11-27 08:52:58.815257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.228 [2024-11-27 08:52:58.815277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:02.228 08:52:58 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:21:02.228 08:52:58 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.228 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.228 [2024-11-27 08:52:58.890992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.228 [2024-11-27 08:52:58.891073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.228 [2024-11-27 08:52:58.891102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:02.228 [2024-11-27 08:52:58.891120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.228 [2024-11-27 08:52:58.894349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.228 [2024-11-27 08:52:58.894403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.228 [2024-11-27 08:52:58.894514] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:02.228 [2024-11-27 08:52:58.894585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.228 [2024-11-27 08:52:58.894742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:02.228 [2024-11-27 08:52:58.894765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:02.228 [2024-11-27 08:52:58.895056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:02.228 [2024-11-27 08:52:58.895265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:02.228 [2024-11-27 08:52:58.895282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:21:02.228 pt2 00:21:02.228 [2024-11-27 08:52:58.895540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.229 "name": "raid_bdev1", 00:21:02.229 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:21:02.229 "strip_size_kb": 0, 00:21:02.229 "state": "online", 00:21:02.229 "raid_level": "raid1", 00:21:02.229 "superblock": true, 00:21:02.229 "num_base_bdevs": 2, 00:21:02.229 "num_base_bdevs_discovered": 1, 00:21:02.229 "num_base_bdevs_operational": 1, 00:21:02.229 "base_bdevs_list": [ 00:21:02.229 { 00:21:02.229 "name": null, 00:21:02.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.229 "is_configured": false, 00:21:02.229 "data_offset": 256, 00:21:02.229 "data_size": 7936 00:21:02.229 }, 00:21:02.229 { 00:21:02.229 "name": "pt2", 00:21:02.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.229 "is_configured": true, 00:21:02.229 "data_offset": 256, 00:21:02.229 "data_size": 7936 00:21:02.229 } 00:21:02.229 ] 00:21:02.229 }' 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.229 08:52:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.795 [2024-11-27 08:52:59.415588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.795 [2024-11-27 08:52:59.415631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.795 [2024-11-27 08:52:59.415741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.795 [2024-11-27 08:52:59.415822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.795 [2024-11-27 08:52:59.415838] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.795 [2024-11-27 08:52:59.475603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:02.795 [2024-11-27 08:52:59.475680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.795 [2024-11-27 08:52:59.475712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:02.795 [2024-11-27 08:52:59.475728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.795 [2024-11-27 08:52:59.478879] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.795 [2024-11-27 08:52:59.478924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:02.795 [2024-11-27 08:52:59.479041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:02.795 [2024-11-27 08:52:59.479107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:02.795 [2024-11-27 08:52:59.479284] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:02.795 [2024-11-27 08:52:59.479307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.795 [2024-11-27 08:52:59.479331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:02.795 [2024-11-27 08:52:59.479450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.795 [2024-11-27 08:52:59.479564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:02.795 [2024-11-27 08:52:59.479586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:02.795 [2024-11-27 08:52:59.479909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:02.795 [2024-11-27 08:52:59.480099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:02.795 [2024-11-27 08:52:59.480121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:02.795 pt1 00:21:02.795 [2024-11-27 08:52:59.480375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.795 "name": "raid_bdev1", 00:21:02.795 "uuid": "7524c3f7-1e53-4d6f-93d5-2348a4b28546", 00:21:02.795 "strip_size_kb": 0, 00:21:02.795 "state": "online", 00:21:02.795 "raid_level": "raid1", 
00:21:02.795 "superblock": true, 00:21:02.795 "num_base_bdevs": 2, 00:21:02.795 "num_base_bdevs_discovered": 1, 00:21:02.795 "num_base_bdevs_operational": 1, 00:21:02.795 "base_bdevs_list": [ 00:21:02.795 { 00:21:02.795 "name": null, 00:21:02.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.795 "is_configured": false, 00:21:02.795 "data_offset": 256, 00:21:02.795 "data_size": 7936 00:21:02.795 }, 00:21:02.795 { 00:21:02.795 "name": "pt2", 00:21:02.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.795 "is_configured": true, 00:21:02.795 "data_offset": 256, 00:21:02.795 "data_size": 7936 00:21:02.795 } 00:21:02.795 ] 00:21:02.795 }' 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.795 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.361 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:03.361 08:52:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:03.361 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.361 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.361 08:52:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.361 08:53:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:03.361 08:53:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:03.361 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.361 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.361 08:53:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:03.361 
[2024-11-27 08:53:00.032112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 7524c3f7-1e53-4d6f-93d5-2348a4b28546 '!=' 7524c3f7-1e53-4d6f-93d5-2348a4b28546 ']' 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86731 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' -z 86731 ']' 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # kill -0 86731 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # uname 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 86731 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # echo 'killing process with pid 86731' 00:21:03.362 killing process with pid 86731 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # kill 86731 00:21:03.362 [2024-11-27 08:53:00.113501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.362 08:53:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@975 -- # wait 86731 00:21:03.362 [2024-11-27 08:53:00.113633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.362 [2024-11-27 08:53:00.113708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:21:03.362 [2024-11-27 08:53:00.113732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:03.620 [2024-11-27 08:53:00.308156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:05.008 08:53:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:21:05.008 00:21:05.008 real 0m6.864s 00:21:05.008 user 0m10.831s 00:21:05.008 sys 0m1.013s 00:21:05.008 08:53:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:05.008 08:53:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:05.008 ************************************ 00:21:05.008 END TEST raid_superblock_test_4k 00:21:05.008 ************************************ 00:21:05.008 08:53:01 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:21:05.008 08:53:01 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:05.008 08:53:01 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:21:05.008 08:53:01 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:05.008 08:53:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:05.008 ************************************ 00:21:05.008 START TEST raid_rebuild_test_sb_4k 00:21:05.008 ************************************ 00:21:05.008 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 2 true false true 00:21:05.008 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:05.008 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:05.009 08:53:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87065 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87065 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@832 -- # '[' -z 87065 ']' 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:05.009 08:53:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:05.009 [2024-11-27 08:53:01.570878] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:21:05.009 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:05.009 Zero copy mechanism will not be used. 
00:21:05.009 [2024-11-27 08:53:01.571270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87065 ] 00:21:05.009 [2024-11-27 08:53:01.748084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.267 [2024-11-27 08:53:01.893216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.525 [2024-11-27 08:53:02.114157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.525 [2024-11-27 08:53:02.114213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # return 0 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 BaseBdev1_malloc 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 [2024-11-27 08:53:02.626111] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:06.092 [2024-11-27 08:53:02.626199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.092 [2024-11-27 08:53:02.626235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:06.092 [2024-11-27 08:53:02.626255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.092 [2024-11-27 08:53:02.629194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.092 [2024-11-27 08:53:02.629246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:06.092 BaseBdev1 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 BaseBdev2_malloc 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 [2024-11-27 08:53:02.685349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:06.092 [2024-11-27 08:53:02.685428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:06.092 [2024-11-27 08:53:02.685458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:06.092 [2024-11-27 08:53:02.685480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.092 [2024-11-27 08:53:02.688327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.092 [2024-11-27 08:53:02.688389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:06.092 BaseBdev2 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 spare_malloc 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 spare_delay 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 
[2024-11-27 08:53:02.766899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:06.092 [2024-11-27 08:53:02.767113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.092 [2024-11-27 08:53:02.767186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:06.092 [2024-11-27 08:53:02.767300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.092 [2024-11-27 08:53:02.770296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.092 [2024-11-27 08:53:02.770491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:06.092 spare 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 [2024-11-27 08:53:02.774998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.092 [2024-11-27 08:53:02.777568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.092 [2024-11-27 08:53:02.777931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:06.092 [2024-11-27 08:53:02.777964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:06.092 [2024-11-27 08:53:02.778274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:06.092 [2024-11-27 08:53:02.778537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:06.092 [2024-11-27 
08:53:02.778556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:06.092 [2024-11-27 08:53:02.778743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.092 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.093 "name": "raid_bdev1", 00:21:06.093 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:06.093 "strip_size_kb": 0, 00:21:06.093 "state": "online", 00:21:06.093 "raid_level": "raid1", 00:21:06.093 "superblock": true, 00:21:06.093 "num_base_bdevs": 2, 00:21:06.093 "num_base_bdevs_discovered": 2, 00:21:06.093 "num_base_bdevs_operational": 2, 00:21:06.093 "base_bdevs_list": [ 00:21:06.093 { 00:21:06.093 "name": "BaseBdev1", 00:21:06.093 "uuid": "c5a3cbf7-b409-5b2e-b729-a26c2f7cc438", 00:21:06.093 "is_configured": true, 00:21:06.093 "data_offset": 256, 00:21:06.093 "data_size": 7936 00:21:06.093 }, 00:21:06.093 { 00:21:06.093 "name": "BaseBdev2", 00:21:06.093 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:06.093 "is_configured": true, 00:21:06.093 "data_offset": 256, 00:21:06.093 "data_size": 7936 00:21:06.093 } 00:21:06.093 ] 00:21:06.093 }' 00:21:06.093 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.093 08:53:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.660 [2024-11-27 08:53:03.299531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.660 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.660 
08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:07.227 [2024-11-27 08:53:03.679319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:07.227 /dev/nbd0 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local i 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # break 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:07.227 1+0 records in 00:21:07.227 1+0 records out 00:21:07.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282685 s, 14.5 MB/s 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # size=4096 00:21:07.227 08:53:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # return 0 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:07.227 08:53:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:08.172 7936+0 records in 00:21:08.172 7936+0 records out 00:21:08.172 32505856 bytes (33 MB, 31 MiB) copied, 0.959626 s, 33.9 MB/s 00:21:08.172 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:08.172 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:08.172 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:08.172 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.172 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:08.172 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.172 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:08.430 [2024-11-27 08:53:04.942949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:08.430 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:08.430 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:08.430 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:08.430 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.431 [2024-11-27 08:53:04.971029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.431 08:53:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.431 08:53:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.431 08:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.431 "name": "raid_bdev1", 00:21:08.431 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:08.431 "strip_size_kb": 0, 00:21:08.431 "state": "online", 00:21:08.431 "raid_level": "raid1", 00:21:08.431 "superblock": true, 00:21:08.431 "num_base_bdevs": 2, 00:21:08.431 "num_base_bdevs_discovered": 1, 00:21:08.431 "num_base_bdevs_operational": 1, 00:21:08.431 "base_bdevs_list": [ 00:21:08.431 { 00:21:08.431 "name": null, 00:21:08.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.431 "is_configured": false, 00:21:08.431 "data_offset": 0, 00:21:08.431 "data_size": 7936 00:21:08.431 }, 00:21:08.431 { 00:21:08.431 "name": "BaseBdev2", 00:21:08.431 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:08.431 "is_configured": true, 00:21:08.431 "data_offset": 256, 00:21:08.431 
"data_size": 7936 00:21:08.431 } 00:21:08.431 ] 00:21:08.431 }' 00:21:08.431 08:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.431 08:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.997 08:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:08.997 08:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.997 08:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.997 [2024-11-27 08:53:05.455220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:08.997 [2024-11-27 08:53:05.472866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:08.997 08:53:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.997 08:53:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:08.997 [2024-11-27 08:53:05.475482] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.934 "name": "raid_bdev1", 00:21:09.934 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:09.934 "strip_size_kb": 0, 00:21:09.934 "state": "online", 00:21:09.934 "raid_level": "raid1", 00:21:09.934 "superblock": true, 00:21:09.934 "num_base_bdevs": 2, 00:21:09.934 "num_base_bdevs_discovered": 2, 00:21:09.934 "num_base_bdevs_operational": 2, 00:21:09.934 "process": { 00:21:09.934 "type": "rebuild", 00:21:09.934 "target": "spare", 00:21:09.934 "progress": { 00:21:09.934 "blocks": 2560, 00:21:09.934 "percent": 32 00:21:09.934 } 00:21:09.934 }, 00:21:09.934 "base_bdevs_list": [ 00:21:09.934 { 00:21:09.934 "name": "spare", 00:21:09.934 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:09.934 "is_configured": true, 00:21:09.934 "data_offset": 256, 00:21:09.934 "data_size": 7936 00:21:09.934 }, 00:21:09.934 { 00:21:09.934 "name": "BaseBdev2", 00:21:09.934 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:09.934 "is_configured": true, 00:21:09.934 "data_offset": 256, 00:21:09.934 "data_size": 7936 00:21:09.934 } 00:21:09.934 ] 00:21:09.934 }' 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.934 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.934 [2024-11-27 08:53:06.641107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:09.934 [2024-11-27 08:53:06.686926] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:09.934 [2024-11-27 08:53:06.687371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.934 [2024-11-27 08:53:06.687404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:09.934 [2024-11-27 08:53:06.687421] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.193 "name": "raid_bdev1", 00:21:10.193 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:10.193 "strip_size_kb": 0, 00:21:10.193 "state": "online", 00:21:10.193 "raid_level": "raid1", 00:21:10.193 "superblock": true, 00:21:10.193 "num_base_bdevs": 2, 00:21:10.193 "num_base_bdevs_discovered": 1, 00:21:10.193 "num_base_bdevs_operational": 1, 00:21:10.193 "base_bdevs_list": [ 00:21:10.193 { 00:21:10.193 "name": null, 00:21:10.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.193 "is_configured": false, 00:21:10.193 "data_offset": 0, 00:21:10.193 "data_size": 7936 00:21:10.193 }, 00:21:10.193 { 00:21:10.193 "name": "BaseBdev2", 00:21:10.193 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:10.193 "is_configured": true, 00:21:10.193 "data_offset": 256, 00:21:10.193 "data_size": 7936 00:21:10.193 } 00:21:10.193 ] 00:21:10.193 }' 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.193 08:53:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.758 08:53:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.758 "name": "raid_bdev1", 00:21:10.758 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:10.758 "strip_size_kb": 0, 00:21:10.758 "state": "online", 00:21:10.758 "raid_level": "raid1", 00:21:10.758 "superblock": true, 00:21:10.758 "num_base_bdevs": 2, 00:21:10.758 "num_base_bdevs_discovered": 1, 00:21:10.758 "num_base_bdevs_operational": 1, 00:21:10.758 "base_bdevs_list": [ 00:21:10.758 { 00:21:10.758 "name": null, 00:21:10.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.758 "is_configured": false, 00:21:10.758 "data_offset": 0, 00:21:10.758 "data_size": 7936 00:21:10.758 }, 00:21:10.758 { 00:21:10.758 "name": "BaseBdev2", 00:21:10.758 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:10.758 "is_configured": true, 00:21:10.758 "data_offset": 
256, 00:21:10.758 "data_size": 7936 00:21:10.758 } 00:21:10.758 ] 00:21:10.758 }' 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.758 [2024-11-27 08:53:07.430518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:10.758 [2024-11-27 08:53:07.448945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.758 08:53:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:10.758 [2024-11-27 08:53:07.452001] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.132 "name": "raid_bdev1", 00:21:12.132 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:12.132 "strip_size_kb": 0, 00:21:12.132 "state": "online", 00:21:12.132 "raid_level": "raid1", 00:21:12.132 "superblock": true, 00:21:12.132 "num_base_bdevs": 2, 00:21:12.132 "num_base_bdevs_discovered": 2, 00:21:12.132 "num_base_bdevs_operational": 2, 00:21:12.132 "process": { 00:21:12.132 "type": "rebuild", 00:21:12.132 "target": "spare", 00:21:12.132 "progress": { 00:21:12.132 "blocks": 2560, 00:21:12.132 "percent": 32 00:21:12.132 } 00:21:12.132 }, 00:21:12.132 "base_bdevs_list": [ 00:21:12.132 { 00:21:12.132 "name": "spare", 00:21:12.132 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:12.132 "is_configured": true, 00:21:12.132 "data_offset": 256, 00:21:12.132 "data_size": 7936 00:21:12.132 }, 00:21:12.132 { 00:21:12.132 "name": "BaseBdev2", 00:21:12.132 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:12.132 "is_configured": true, 00:21:12.132 "data_offset": 256, 00:21:12.132 "data_size": 7936 00:21:12.132 } 00:21:12.132 ] 00:21:12.132 }' 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:12.132 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=742 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.132 08:53:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:12.132 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.133 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.133 "name": "raid_bdev1", 00:21:12.133 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:12.133 "strip_size_kb": 0, 00:21:12.133 "state": "online", 00:21:12.133 "raid_level": "raid1", 00:21:12.133 "superblock": true, 00:21:12.133 "num_base_bdevs": 2, 00:21:12.133 "num_base_bdevs_discovered": 2, 00:21:12.133 "num_base_bdevs_operational": 2, 00:21:12.133 "process": { 00:21:12.133 "type": "rebuild", 00:21:12.133 "target": "spare", 00:21:12.133 "progress": { 00:21:12.133 "blocks": 2816, 00:21:12.133 "percent": 35 00:21:12.133 } 00:21:12.133 }, 00:21:12.133 "base_bdevs_list": [ 00:21:12.133 { 00:21:12.133 "name": "spare", 00:21:12.133 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:12.133 "is_configured": true, 00:21:12.133 "data_offset": 256, 00:21:12.133 "data_size": 7936 00:21:12.133 }, 00:21:12.133 { 00:21:12.133 "name": "BaseBdev2", 00:21:12.133 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:12.133 "is_configured": true, 00:21:12.133 "data_offset": 256, 00:21:12.133 "data_size": 7936 00:21:12.133 } 00:21:12.133 ] 00:21:12.133 }' 00:21:12.133 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.133 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.133 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.133 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.133 08:53:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.066 "name": "raid_bdev1", 00:21:13.066 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:13.066 "strip_size_kb": 0, 00:21:13.066 "state": "online", 00:21:13.066 "raid_level": "raid1", 00:21:13.066 "superblock": true, 00:21:13.066 "num_base_bdevs": 2, 00:21:13.066 "num_base_bdevs_discovered": 2, 00:21:13.066 "num_base_bdevs_operational": 2, 00:21:13.066 "process": { 00:21:13.066 "type": "rebuild", 00:21:13.066 "target": "spare", 00:21:13.066 "progress": { 00:21:13.066 "blocks": 5632, 00:21:13.066 "percent": 70 00:21:13.066 } 00:21:13.066 }, 00:21:13.066 "base_bdevs_list": [ 00:21:13.066 { 
00:21:13.066 "name": "spare", 00:21:13.066 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:13.066 "is_configured": true, 00:21:13.066 "data_offset": 256, 00:21:13.066 "data_size": 7936 00:21:13.066 }, 00:21:13.066 { 00:21:13.066 "name": "BaseBdev2", 00:21:13.066 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:13.066 "is_configured": true, 00:21:13.066 "data_offset": 256, 00:21:13.066 "data_size": 7936 00:21:13.066 } 00:21:13.066 ] 00:21:13.066 }' 00:21:13.066 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.383 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.383 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.383 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.383 08:53:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:13.949 [2024-11-27 08:53:10.581527] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:13.949 [2024-11-27 08:53:10.581833] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:13.949 [2024-11-27 08:53:10.582035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.208 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.466 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.466 "name": "raid_bdev1", 00:21:14.466 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:14.466 "strip_size_kb": 0, 00:21:14.466 "state": "online", 00:21:14.466 "raid_level": "raid1", 00:21:14.466 "superblock": true, 00:21:14.466 "num_base_bdevs": 2, 00:21:14.466 "num_base_bdevs_discovered": 2, 00:21:14.466 "num_base_bdevs_operational": 2, 00:21:14.466 "base_bdevs_list": [ 00:21:14.466 { 00:21:14.466 "name": "spare", 00:21:14.466 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:14.466 "is_configured": true, 00:21:14.466 "data_offset": 256, 00:21:14.466 "data_size": 7936 00:21:14.466 }, 00:21:14.466 { 00:21:14.466 "name": "BaseBdev2", 00:21:14.466 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:14.466 "is_configured": true, 00:21:14.466 "data_offset": 256, 00:21:14.466 "data_size": 7936 00:21:14.466 } 00:21:14.466 ] 00:21:14.466 }' 00:21:14.466 08:53:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.466 "name": "raid_bdev1", 00:21:14.466 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:14.466 "strip_size_kb": 0, 00:21:14.466 "state": "online", 00:21:14.466 "raid_level": "raid1", 00:21:14.466 "superblock": true, 00:21:14.466 "num_base_bdevs": 2, 00:21:14.466 "num_base_bdevs_discovered": 2, 00:21:14.466 "num_base_bdevs_operational": 2, 00:21:14.466 "base_bdevs_list": [ 00:21:14.466 { 00:21:14.466 "name": "spare", 00:21:14.466 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:14.466 "is_configured": true, 00:21:14.466 
"data_offset": 256, 00:21:14.466 "data_size": 7936 00:21:14.466 }, 00:21:14.466 { 00:21:14.466 "name": "BaseBdev2", 00:21:14.466 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:14.466 "is_configured": true, 00:21:14.466 "data_offset": 256, 00:21:14.466 "data_size": 7936 00:21:14.466 } 00:21:14.466 ] 00:21:14.466 }' 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:14.466 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.724 "name": "raid_bdev1", 00:21:14.724 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:14.724 "strip_size_kb": 0, 00:21:14.724 "state": "online", 00:21:14.724 "raid_level": "raid1", 00:21:14.724 "superblock": true, 00:21:14.724 "num_base_bdevs": 2, 00:21:14.724 "num_base_bdevs_discovered": 2, 00:21:14.724 "num_base_bdevs_operational": 2, 00:21:14.724 "base_bdevs_list": [ 00:21:14.724 { 00:21:14.724 "name": "spare", 00:21:14.724 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:14.724 "is_configured": true, 00:21:14.724 "data_offset": 256, 00:21:14.724 "data_size": 7936 00:21:14.724 }, 00:21:14.724 { 00:21:14.724 "name": "BaseBdev2", 00:21:14.724 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:14.724 "is_configured": true, 00:21:14.724 "data_offset": 256, 00:21:14.724 "data_size": 7936 00:21:14.724 } 00:21:14.724 ] 00:21:14.724 }' 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.724 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:15.290 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:15.291 
[2024-11-27 08:53:11.796033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:15.291 [2024-11-27 08:53:11.796252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.291 [2024-11-27 08:53:11.796415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.291 [2024-11-27 08:53:11.796533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.291 [2024-11-27 08:53:11.796555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.291 08:53:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:15.549 /dev/nbd0 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local i 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # break 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.549 1+0 records in 00:21:15.549 1+0 records out 00:21:15.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320074 s, 12.8 MB/s 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # size=4096 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # return 0 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.549 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:15.807 /dev/nbd1 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local i 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # break 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:21:15.807 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.808 1+0 records in 00:21:15.808 1+0 records out 00:21:15.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587197 s, 7.0 MB/s 00:21:15.808 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.808 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # size=4096 00:21:15.808 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.808 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:21:15.808 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # return 0 00:21:15.808 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.808 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.808 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:16.066 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:16.066 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:16.066 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:16.066 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:16.066 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:16.066 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.066 08:53:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.324 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:16.891 08:53:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.891 [2024-11-27 08:53:13.385019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:16.891 [2024-11-27 08:53:13.385086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.891 [2024-11-27 08:53:13.385121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:16.891 [2024-11-27 08:53:13.385137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.891 [2024-11-27 08:53:13.388292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.891 
[2024-11-27 08:53:13.388353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:16.891 [2024-11-27 08:53:13.388476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:16.891 [2024-11-27 08:53:13.388550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:16.891 [2024-11-27 08:53:13.388747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:16.891 spare 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.891 [2024-11-27 08:53:13.488906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:16.891 [2024-11-27 08:53:13.488996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:16.891 [2024-11-27 08:53:13.489522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:16.891 [2024-11-27 08:53:13.489816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:16.891 [2024-11-27 08:53:13.489834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:16.891 [2024-11-27 08:53:13.490116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:16.891 08:53:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.891 "name": "raid_bdev1", 00:21:16.891 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:16.891 "strip_size_kb": 0, 00:21:16.891 "state": "online", 00:21:16.891 "raid_level": "raid1", 00:21:16.891 "superblock": true, 00:21:16.891 "num_base_bdevs": 2, 00:21:16.891 "num_base_bdevs_discovered": 2, 00:21:16.891 "num_base_bdevs_operational": 2, 
00:21:16.891 "base_bdevs_list": [ 00:21:16.891 { 00:21:16.891 "name": "spare", 00:21:16.891 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:16.891 "is_configured": true, 00:21:16.891 "data_offset": 256, 00:21:16.891 "data_size": 7936 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "name": "BaseBdev2", 00:21:16.891 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:16.891 "is_configured": true, 00:21:16.891 "data_offset": 256, 00:21:16.891 "data_size": 7936 00:21:16.891 } 00:21:16.891 ] 00:21:16.891 }' 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.891 08:53:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.458 "name": "raid_bdev1", 00:21:17.458 
"uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:17.458 "strip_size_kb": 0, 00:21:17.458 "state": "online", 00:21:17.458 "raid_level": "raid1", 00:21:17.458 "superblock": true, 00:21:17.458 "num_base_bdevs": 2, 00:21:17.458 "num_base_bdevs_discovered": 2, 00:21:17.458 "num_base_bdevs_operational": 2, 00:21:17.458 "base_bdevs_list": [ 00:21:17.458 { 00:21:17.458 "name": "spare", 00:21:17.458 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:17.458 "is_configured": true, 00:21:17.458 "data_offset": 256, 00:21:17.458 "data_size": 7936 00:21:17.458 }, 00:21:17.458 { 00:21:17.458 "name": "BaseBdev2", 00:21:17.458 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:17.458 "is_configured": true, 00:21:17.458 "data_offset": 256, 00:21:17.458 "data_size": 7936 00:21:17.458 } 00:21:17.458 ] 00:21:17.458 }' 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.458 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.717 [2024-11-27 08:53:14.262349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.717 
08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.717 "name": "raid_bdev1", 00:21:17.717 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:17.717 "strip_size_kb": 0, 00:21:17.717 "state": "online", 00:21:17.717 "raid_level": "raid1", 00:21:17.717 "superblock": true, 00:21:17.717 "num_base_bdevs": 2, 00:21:17.717 "num_base_bdevs_discovered": 1, 00:21:17.717 "num_base_bdevs_operational": 1, 00:21:17.717 "base_bdevs_list": [ 00:21:17.717 { 00:21:17.717 "name": null, 00:21:17.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.717 "is_configured": false, 00:21:17.717 "data_offset": 0, 00:21:17.717 "data_size": 7936 00:21:17.717 }, 00:21:17.717 { 00:21:17.717 "name": "BaseBdev2", 00:21:17.717 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:17.717 "is_configured": true, 00:21:17.717 "data_offset": 256, 00:21:17.717 "data_size": 7936 00:21:17.717 } 00:21:17.717 ] 00:21:17.717 }' 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.717 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.284 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:18.284 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.284 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:18.284 [2024-11-27 08:53:14.790557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.284 [2024-11-27 08:53:14.790868] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:21:18.284 [2024-11-27 08:53:14.790897] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:18.284 [2024-11-27 08:53:14.790948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.284 [2024-11-27 08:53:14.807580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:18.284 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.284 08:53:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:18.284 [2024-11-27 08:53:14.810204] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.218 
"name": "raid_bdev1", 00:21:19.218 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:19.218 "strip_size_kb": 0, 00:21:19.218 "state": "online", 00:21:19.218 "raid_level": "raid1", 00:21:19.218 "superblock": true, 00:21:19.218 "num_base_bdevs": 2, 00:21:19.218 "num_base_bdevs_discovered": 2, 00:21:19.218 "num_base_bdevs_operational": 2, 00:21:19.218 "process": { 00:21:19.218 "type": "rebuild", 00:21:19.218 "target": "spare", 00:21:19.218 "progress": { 00:21:19.218 "blocks": 2560, 00:21:19.218 "percent": 32 00:21:19.218 } 00:21:19.218 }, 00:21:19.218 "base_bdevs_list": [ 00:21:19.218 { 00:21:19.218 "name": "spare", 00:21:19.218 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:19.218 "is_configured": true, 00:21:19.218 "data_offset": 256, 00:21:19.218 "data_size": 7936 00:21:19.218 }, 00:21:19.218 { 00:21:19.218 "name": "BaseBdev2", 00:21:19.218 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:19.218 "is_configured": true, 00:21:19.218 "data_offset": 256, 00:21:19.218 "data_size": 7936 00:21:19.218 } 00:21:19.218 ] 00:21:19.218 }' 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.218 08:53:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:19.218 [2024-11-27 08:53:15.971651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.477 [2024-11-27 
08:53:16.021203] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:19.477 [2024-11-27 08:53:16.021294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.477 [2024-11-27 08:53:16.021319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.477 [2024-11-27 08:53:16.021352] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.477 08:53:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.477 "name": "raid_bdev1", 00:21:19.477 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:19.477 "strip_size_kb": 0, 00:21:19.477 "state": "online", 00:21:19.477 "raid_level": "raid1", 00:21:19.477 "superblock": true, 00:21:19.477 "num_base_bdevs": 2, 00:21:19.477 "num_base_bdevs_discovered": 1, 00:21:19.477 "num_base_bdevs_operational": 1, 00:21:19.477 "base_bdevs_list": [ 00:21:19.477 { 00:21:19.477 "name": null, 00:21:19.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.477 "is_configured": false, 00:21:19.477 "data_offset": 0, 00:21:19.477 "data_size": 7936 00:21:19.477 }, 00:21:19.477 { 00:21:19.477 "name": "BaseBdev2", 00:21:19.477 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:19.477 "is_configured": true, 00:21:19.477 "data_offset": 256, 00:21:19.477 "data_size": 7936 00:21:19.477 } 00:21:19.477 ] 00:21:19.477 }' 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.477 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.044 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:20.044 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.044 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.044 [2024-11-27 08:53:16.542941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:20.044 [2024-11-27 08:53:16.543039] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.044 [2024-11-27 08:53:16.543075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:20.044 [2024-11-27 08:53:16.543094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.044 [2024-11-27 08:53:16.543794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.044 [2024-11-27 08:53:16.543833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:20.044 [2024-11-27 08:53:16.543972] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:20.044 [2024-11-27 08:53:16.544004] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:20.044 [2024-11-27 08:53:16.544020] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:20.044 [2024-11-27 08:53:16.544053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.044 [2024-11-27 08:53:16.560587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:20.044 spare 00:21:20.044 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.044 08:53:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:20.044 [2024-11-27 08:53:16.563329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.980 "name": "raid_bdev1", 00:21:20.980 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:20.980 "strip_size_kb": 0, 00:21:20.980 
"state": "online", 00:21:20.980 "raid_level": "raid1", 00:21:20.980 "superblock": true, 00:21:20.980 "num_base_bdevs": 2, 00:21:20.980 "num_base_bdevs_discovered": 2, 00:21:20.980 "num_base_bdevs_operational": 2, 00:21:20.980 "process": { 00:21:20.980 "type": "rebuild", 00:21:20.980 "target": "spare", 00:21:20.980 "progress": { 00:21:20.980 "blocks": 2560, 00:21:20.980 "percent": 32 00:21:20.980 } 00:21:20.980 }, 00:21:20.980 "base_bdevs_list": [ 00:21:20.980 { 00:21:20.980 "name": "spare", 00:21:20.980 "uuid": "b1cb2536-3506-5aec-9a3a-3c94a74c4113", 00:21:20.980 "is_configured": true, 00:21:20.980 "data_offset": 256, 00:21:20.980 "data_size": 7936 00:21:20.980 }, 00:21:20.980 { 00:21:20.980 "name": "BaseBdev2", 00:21:20.980 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:20.980 "is_configured": true, 00:21:20.980 "data_offset": 256, 00:21:20.980 "data_size": 7936 00:21:20.980 } 00:21:20.980 ] 00:21:20.980 }' 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.980 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:20.980 [2024-11-27 08:53:17.724960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.239 [2024-11-27 08:53:17.774612] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:21:21.239 [2024-11-27 08:53:17.774742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.239 [2024-11-27 08:53:17.774771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.239 [2024-11-27 08:53:17.774783] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.239 08:53:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.239 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.239 "name": "raid_bdev1", 00:21:21.239 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:21.239 "strip_size_kb": 0, 00:21:21.239 "state": "online", 00:21:21.239 "raid_level": "raid1", 00:21:21.239 "superblock": true, 00:21:21.239 "num_base_bdevs": 2, 00:21:21.239 "num_base_bdevs_discovered": 1, 00:21:21.239 "num_base_bdevs_operational": 1, 00:21:21.239 "base_bdevs_list": [ 00:21:21.239 { 00:21:21.239 "name": null, 00:21:21.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.239 "is_configured": false, 00:21:21.239 "data_offset": 0, 00:21:21.239 "data_size": 7936 00:21:21.239 }, 00:21:21.239 { 00:21:21.239 "name": "BaseBdev2", 00:21:21.239 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:21.239 "is_configured": true, 00:21:21.239 "data_offset": 256, 00:21:21.240 "data_size": 7936 00:21:21.240 } 00:21:21.240 ] 00:21:21.240 }' 00:21:21.240 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.240 08:53:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.806 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.806 "name": "raid_bdev1", 00:21:21.806 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:21.806 "strip_size_kb": 0, 00:21:21.806 "state": "online", 00:21:21.806 "raid_level": "raid1", 00:21:21.806 "superblock": true, 00:21:21.806 "num_base_bdevs": 2, 00:21:21.806 "num_base_bdevs_discovered": 1, 00:21:21.806 "num_base_bdevs_operational": 1, 00:21:21.806 "base_bdevs_list": [ 00:21:21.806 { 00:21:21.806 "name": null, 00:21:21.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.806 "is_configured": false, 00:21:21.806 "data_offset": 0, 00:21:21.806 "data_size": 7936 00:21:21.806 }, 00:21:21.806 { 00:21:21.806 "name": "BaseBdev2", 00:21:21.806 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:21.806 "is_configured": true, 00:21:21.806 "data_offset": 256, 00:21:21.806 "data_size": 7936 00:21:21.806 } 00:21:21.806 ] 00:21:21.806 }' 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:21.807 [2024-11-27 08:53:18.472571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:21.807 [2024-11-27 08:53:18.472646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.807 [2024-11-27 08:53:18.472683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:21.807 [2024-11-27 08:53:18.472711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.807 [2024-11-27 08:53:18.473330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.807 [2024-11-27 08:53:18.473373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:21.807 [2024-11-27 08:53:18.473490] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:21.807 [2024-11-27 08:53:18.473515] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:21.807 [2024-11-27 08:53:18.473530] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:21.807 [2024-11-27 08:53:18.473548] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:21.807 BaseBdev1 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.807 08:53:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:22.740 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.999 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.999 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.999 "name": "raid_bdev1", 00:21:22.999 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:22.999 "strip_size_kb": 0, 00:21:22.999 "state": "online", 00:21:22.999 "raid_level": "raid1", 00:21:22.999 "superblock": true, 00:21:22.999 "num_base_bdevs": 2, 00:21:22.999 "num_base_bdevs_discovered": 1, 00:21:22.999 "num_base_bdevs_operational": 1, 00:21:22.999 "base_bdevs_list": [ 00:21:22.999 { 00:21:22.999 "name": null, 00:21:22.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.999 "is_configured": false, 00:21:22.999 "data_offset": 0, 00:21:22.999 "data_size": 7936 00:21:22.999 }, 00:21:22.999 { 00:21:22.999 "name": "BaseBdev2", 00:21:22.999 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:22.999 "is_configured": true, 00:21:22.999 "data_offset": 256, 00:21:22.999 "data_size": 7936 00:21:22.999 } 00:21:22.999 ] 00:21:22.999 }' 00:21:22.999 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.999 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.258 08:53:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.517 "name": "raid_bdev1", 00:21:23.517 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:23.517 "strip_size_kb": 0, 00:21:23.517 "state": "online", 00:21:23.517 "raid_level": "raid1", 00:21:23.517 "superblock": true, 00:21:23.517 "num_base_bdevs": 2, 00:21:23.517 "num_base_bdevs_discovered": 1, 00:21:23.517 "num_base_bdevs_operational": 1, 00:21:23.517 "base_bdevs_list": [ 00:21:23.517 { 00:21:23.517 "name": null, 00:21:23.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.517 "is_configured": false, 00:21:23.517 "data_offset": 0, 00:21:23.517 "data_size": 7936 00:21:23.517 }, 00:21:23.517 { 00:21:23.517 "name": "BaseBdev2", 00:21:23.517 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:23.517 "is_configured": true, 00:21:23.517 "data_offset": 256, 00:21:23.517 "data_size": 7936 00:21:23.517 } 00:21:23.517 ] 00:21:23.517 }' 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:23.517 [2024-11-27 08:53:20.161122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:23.517 [2024-11-27 08:53:20.161385] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:23.517 [2024-11-27 08:53:20.161412] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:23.517 request: 00:21:23.517 { 00:21:23.517 "base_bdev": "BaseBdev1", 00:21:23.517 "raid_bdev": "raid_bdev1", 00:21:23.517 "method": "bdev_raid_add_base_bdev", 00:21:23.517 "req_id": 1 00:21:23.517 } 00:21:23.517 Got JSON-RPC error response 00:21:23.517 response: 00:21:23.517 { 00:21:23.517 "code": -22, 00:21:23.517 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:23.517 } 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.517 08:53:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:24.453 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.712 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.712 "name": "raid_bdev1", 00:21:24.712 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:24.712 "strip_size_kb": 0, 00:21:24.712 "state": "online", 00:21:24.712 "raid_level": "raid1", 00:21:24.712 "superblock": true, 00:21:24.712 "num_base_bdevs": 2, 00:21:24.712 "num_base_bdevs_discovered": 1, 00:21:24.712 "num_base_bdevs_operational": 1, 00:21:24.712 "base_bdevs_list": [ 00:21:24.712 { 00:21:24.712 "name": null, 00:21:24.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.712 "is_configured": false, 00:21:24.712 "data_offset": 0, 00:21:24.712 "data_size": 7936 00:21:24.712 }, 00:21:24.712 { 00:21:24.712 "name": "BaseBdev2", 00:21:24.712 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:24.712 "is_configured": true, 00:21:24.712 "data_offset": 256, 00:21:24.712 "data_size": 7936 00:21:24.712 } 00:21:24.712 ] 00:21:24.712 }' 00:21:24.712 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.712 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.970 08:53:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.970 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.970 "name": "raid_bdev1", 00:21:24.970 "uuid": "294718c7-dbeb-4714-83aa-ad168de0578b", 00:21:24.970 "strip_size_kb": 0, 00:21:24.970 "state": "online", 00:21:24.970 "raid_level": "raid1", 00:21:24.970 "superblock": true, 00:21:24.970 "num_base_bdevs": 2, 00:21:24.970 "num_base_bdevs_discovered": 1, 00:21:24.970 "num_base_bdevs_operational": 1, 00:21:24.970 "base_bdevs_list": [ 00:21:24.970 { 00:21:24.970 "name": null, 00:21:24.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.970 "is_configured": false, 00:21:24.970 "data_offset": 0, 00:21:24.970 "data_size": 7936 00:21:24.970 }, 00:21:24.970 { 00:21:24.970 "name": "BaseBdev2", 00:21:24.970 "uuid": "fc610b4f-e75a-50a9-b23f-2bb1effdae9a", 00:21:24.970 "is_configured": true, 00:21:24.970 "data_offset": 256, 00:21:24.970 "data_size": 7936 00:21:24.970 } 00:21:24.970 ] 00:21:24.970 }' 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:25.229 08:53:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87065 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # '[' -z 87065 ']' 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # kill -0 87065 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # uname 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 87065 00:21:25.229 killing process with pid 87065 00:21:25.229 Received shutdown signal, test time was about 60.000000 seconds 00:21:25.229 00:21:25.229 Latency(us) 00:21:25.229 [2024-11-27T08:53:21.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.229 [2024-11-27T08:53:21.989Z] =================================================================================================================== 00:21:25.229 [2024-11-27T08:53:21.989Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # echo 'killing process with pid 87065' 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # kill 87065 00:21:25.229 [2024-11-27 08:53:21.860374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.229 08:53:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@975 -- # wait 87065 00:21:25.229 [2024-11-27 08:53:21.860579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.229 [2024-11-27 
08:53:21.860663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.229 [2024-11-27 08:53:21.860685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:25.488 [2024-11-27 08:53:22.137620] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:26.863 08:53:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:21:26.863 00:21:26.863 real 0m21.775s 00:21:26.863 user 0m29.244s 00:21:26.863 sys 0m2.640s 00:21:26.863 08:53:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:26.863 ************************************ 00:21:26.863 END TEST raid_rebuild_test_sb_4k 00:21:26.863 ************************************ 00:21:26.863 08:53:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:26.863 08:53:23 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:21:26.863 08:53:23 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:21:26.863 08:53:23 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:21:26.863 08:53:23 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:26.863 08:53:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:26.863 ************************************ 00:21:26.863 START TEST raid_state_function_test_sb_md_separate 00:21:26.863 ************************************ 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 2 true 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:26.863 
08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:26.863 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:26.864 08:53:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:26.864 Process raid pid: 87767 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87767 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87767' 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87767 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # '[' -z 87767 ']' 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:26.864 08:53:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.864 [2024-11-27 08:53:23.421633] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:21:26.864 [2024-11-27 08:53:23.422091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.864 [2024-11-27 08:53:23.610118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.122 [2024-11-27 08:53:23.758324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.381 [2024-11-27 08:53:23.986483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.381 [2024-11-27 08:53:23.986550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.949 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:27.949 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@865 -- # return 0 00:21:27.949 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.950 [2024-11-27 08:53:24.417127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:27.950 [2024-11-27 08:53:24.417213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:21:27.950 [2024-11-27 08:53:24.417230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.950 [2024-11-27 08:53:24.417247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.950 "name": "Existed_Raid", 00:21:27.950 "uuid": "45e7310e-5025-4a3a-a824-7e813abff4c8", 00:21:27.950 "strip_size_kb": 0, 00:21:27.950 "state": "configuring", 00:21:27.950 "raid_level": "raid1", 00:21:27.950 "superblock": true, 00:21:27.950 "num_base_bdevs": 2, 00:21:27.950 "num_base_bdevs_discovered": 0, 00:21:27.950 "num_base_bdevs_operational": 2, 00:21:27.950 "base_bdevs_list": [ 00:21:27.950 { 00:21:27.950 "name": "BaseBdev1", 00:21:27.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.950 "is_configured": false, 00:21:27.950 "data_offset": 0, 00:21:27.950 "data_size": 0 00:21:27.950 }, 00:21:27.950 { 00:21:27.950 "name": "BaseBdev2", 00:21:27.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.950 "is_configured": false, 00:21:27.950 "data_offset": 0, 00:21:27.950 "data_size": 0 00:21:27.950 } 00:21:27.950 ] 00:21:27.950 }' 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.950 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.208 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.209 
[2024-11-27 08:53:24.949381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:28.209 [2024-11-27 08:53:24.949459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.209 [2024-11-27 08:53:24.957241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:28.209 [2024-11-27 08:53:24.957343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:28.209 [2024-11-27 08:53:24.957372] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:28.209 [2024-11-27 08:53:24.957395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.209 08:53:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.468 [2024-11-27 08:53:25.009635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.468 
BaseBdev1 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local i 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.468 [ 00:21:28.468 { 00:21:28.468 "name": "BaseBdev1", 00:21:28.468 "aliases": [ 00:21:28.468 "68729124-b400-4ec3-a967-1225aef6e2d0" 00:21:28.468 ], 00:21:28.468 "product_name": "Malloc disk", 
00:21:28.468 "block_size": 4096, 00:21:28.468 "num_blocks": 8192, 00:21:28.468 "uuid": "68729124-b400-4ec3-a967-1225aef6e2d0", 00:21:28.468 "md_size": 32, 00:21:28.468 "md_interleave": false, 00:21:28.468 "dif_type": 0, 00:21:28.468 "assigned_rate_limits": { 00:21:28.468 "rw_ios_per_sec": 0, 00:21:28.468 "rw_mbytes_per_sec": 0, 00:21:28.468 "r_mbytes_per_sec": 0, 00:21:28.468 "w_mbytes_per_sec": 0 00:21:28.468 }, 00:21:28.468 "claimed": true, 00:21:28.468 "claim_type": "exclusive_write", 00:21:28.468 "zoned": false, 00:21:28.468 "supported_io_types": { 00:21:28.468 "read": true, 00:21:28.468 "write": true, 00:21:28.468 "unmap": true, 00:21:28.468 "flush": true, 00:21:28.468 "reset": true, 00:21:28.468 "nvme_admin": false, 00:21:28.468 "nvme_io": false, 00:21:28.468 "nvme_io_md": false, 00:21:28.468 "write_zeroes": true, 00:21:28.468 "zcopy": true, 00:21:28.468 "get_zone_info": false, 00:21:28.468 "zone_management": false, 00:21:28.468 "zone_append": false, 00:21:28.468 "compare": false, 00:21:28.468 "compare_and_write": false, 00:21:28.468 "abort": true, 00:21:28.468 "seek_hole": false, 00:21:28.468 "seek_data": false, 00:21:28.468 "copy": true, 00:21:28.468 "nvme_iov_md": false 00:21:28.468 }, 00:21:28.468 "memory_domains": [ 00:21:28.468 { 00:21:28.468 "dma_device_id": "system", 00:21:28.468 "dma_device_type": 1 00:21:28.468 }, 00:21:28.468 { 00:21:28.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.468 "dma_device_type": 2 00:21:28.468 } 00:21:28.468 ], 00:21:28.468 "driver_specific": {} 00:21:28.468 } 00:21:28.468 ] 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # return 0 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:28.468 08:53:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.468 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.469 "name": "Existed_Raid", 00:21:28.469 "uuid": "ca8dfb29-2110-409f-bf4f-c4380abc101c", 
00:21:28.469 "strip_size_kb": 0, 00:21:28.469 "state": "configuring", 00:21:28.469 "raid_level": "raid1", 00:21:28.469 "superblock": true, 00:21:28.469 "num_base_bdevs": 2, 00:21:28.469 "num_base_bdevs_discovered": 1, 00:21:28.469 "num_base_bdevs_operational": 2, 00:21:28.469 "base_bdevs_list": [ 00:21:28.469 { 00:21:28.469 "name": "BaseBdev1", 00:21:28.469 "uuid": "68729124-b400-4ec3-a967-1225aef6e2d0", 00:21:28.469 "is_configured": true, 00:21:28.469 "data_offset": 256, 00:21:28.469 "data_size": 7936 00:21:28.469 }, 00:21:28.469 { 00:21:28.469 "name": "BaseBdev2", 00:21:28.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.469 "is_configured": false, 00:21:28.469 "data_offset": 0, 00:21:28.469 "data_size": 0 00:21:28.469 } 00:21:28.469 ] 00:21:28.469 }' 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.469 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.036 [2024-11-27 08:53:25.597926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:29.036 [2024-11-27 08:53:25.598029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:29.036 08:53:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.036 [2024-11-27 08:53:25.609913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.036 [2024-11-27 08:53:25.612710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.036 [2024-11-27 08:53:25.612766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.036 "name": "Existed_Raid", 00:21:29.036 "uuid": "e11d10f1-9042-40c8-ab0f-d2e78fca3e1b", 00:21:29.036 "strip_size_kb": 0, 00:21:29.036 "state": "configuring", 00:21:29.036 "raid_level": "raid1", 00:21:29.036 "superblock": true, 00:21:29.036 "num_base_bdevs": 2, 00:21:29.036 "num_base_bdevs_discovered": 1, 00:21:29.036 "num_base_bdevs_operational": 2, 00:21:29.036 "base_bdevs_list": [ 00:21:29.036 { 00:21:29.036 "name": "BaseBdev1", 00:21:29.036 "uuid": "68729124-b400-4ec3-a967-1225aef6e2d0", 00:21:29.036 "is_configured": true, 00:21:29.036 "data_offset": 256, 00:21:29.036 "data_size": 7936 00:21:29.036 }, 00:21:29.036 { 00:21:29.036 "name": "BaseBdev2", 00:21:29.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.036 "is_configured": false, 00:21:29.036 "data_offset": 0, 00:21:29.036 "data_size": 0 00:21:29.036 } 00:21:29.036 ] 00:21:29.036 }' 00:21:29.036 08:53:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.036 08:53:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.602 [2024-11-27 08:53:26.161373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:29.602 [2024-11-27 08:53:26.161726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:29.602 [2024-11-27 08:53:26.161750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:29.602 [2024-11-27 08:53:26.161918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:29.602 [2024-11-27 08:53:26.162119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:29.602 [2024-11-27 08:53:26.162149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:29.602 BaseBdev2 00:21:29.602 [2024-11-27 08:53:26.162298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@901 -- # local bdev_timeout= 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local i 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.602 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.602 [ 00:21:29.602 { 00:21:29.602 "name": "BaseBdev2", 00:21:29.602 "aliases": [ 00:21:29.602 "29e16bee-2c1d-4fec-8403-9844b3b2028d" 00:21:29.602 ], 00:21:29.602 "product_name": "Malloc disk", 00:21:29.602 "block_size": 4096, 00:21:29.602 "num_blocks": 8192, 00:21:29.602 "uuid": "29e16bee-2c1d-4fec-8403-9844b3b2028d", 00:21:29.602 "md_size": 32, 00:21:29.602 "md_interleave": false, 00:21:29.602 "dif_type": 0, 00:21:29.602 "assigned_rate_limits": { 00:21:29.602 "rw_ios_per_sec": 0, 00:21:29.602 "rw_mbytes_per_sec": 0, 00:21:29.602 "r_mbytes_per_sec": 0, 00:21:29.602 "w_mbytes_per_sec": 0 00:21:29.602 }, 00:21:29.602 "claimed": true, 00:21:29.602 "claim_type": 
"exclusive_write", 00:21:29.602 "zoned": false, 00:21:29.602 "supported_io_types": { 00:21:29.602 "read": true, 00:21:29.602 "write": true, 00:21:29.602 "unmap": true, 00:21:29.602 "flush": true, 00:21:29.602 "reset": true, 00:21:29.602 "nvme_admin": false, 00:21:29.602 "nvme_io": false, 00:21:29.602 "nvme_io_md": false, 00:21:29.602 "write_zeroes": true, 00:21:29.602 "zcopy": true, 00:21:29.602 "get_zone_info": false, 00:21:29.602 "zone_management": false, 00:21:29.602 "zone_append": false, 00:21:29.602 "compare": false, 00:21:29.602 "compare_and_write": false, 00:21:29.602 "abort": true, 00:21:29.602 "seek_hole": false, 00:21:29.602 "seek_data": false, 00:21:29.602 "copy": true, 00:21:29.602 "nvme_iov_md": false 00:21:29.602 }, 00:21:29.602 "memory_domains": [ 00:21:29.602 { 00:21:29.602 "dma_device_id": "system", 00:21:29.602 "dma_device_type": 1 00:21:29.602 }, 00:21:29.602 { 00:21:29.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.603 "dma_device_type": 2 00:21:29.603 } 00:21:29.603 ], 00:21:29.603 "driver_specific": {} 00:21:29.603 } 00:21:29.603 ] 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # return 0 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.603 
08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.603 "name": "Existed_Raid", 00:21:29.603 "uuid": "e11d10f1-9042-40c8-ab0f-d2e78fca3e1b", 00:21:29.603 "strip_size_kb": 0, 00:21:29.603 "state": "online", 00:21:29.603 "raid_level": "raid1", 00:21:29.603 "superblock": true, 00:21:29.603 "num_base_bdevs": 2, 00:21:29.603 "num_base_bdevs_discovered": 2, 00:21:29.603 "num_base_bdevs_operational": 2, 00:21:29.603 
"base_bdevs_list": [ 00:21:29.603 { 00:21:29.603 "name": "BaseBdev1", 00:21:29.603 "uuid": "68729124-b400-4ec3-a967-1225aef6e2d0", 00:21:29.603 "is_configured": true, 00:21:29.603 "data_offset": 256, 00:21:29.603 "data_size": 7936 00:21:29.603 }, 00:21:29.603 { 00:21:29.603 "name": "BaseBdev2", 00:21:29.603 "uuid": "29e16bee-2c1d-4fec-8403-9844b3b2028d", 00:21:29.603 "is_configured": true, 00:21:29.603 "data_offset": 256, 00:21:29.603 "data_size": 7936 00:21:29.603 } 00:21:29.603 ] 00:21:29.603 }' 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.603 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:21:30.194 [2024-11-27 08:53:26.726006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.194 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:30.194 "name": "Existed_Raid", 00:21:30.194 "aliases": [ 00:21:30.194 "e11d10f1-9042-40c8-ab0f-d2e78fca3e1b" 00:21:30.194 ], 00:21:30.194 "product_name": "Raid Volume", 00:21:30.194 "block_size": 4096, 00:21:30.194 "num_blocks": 7936, 00:21:30.194 "uuid": "e11d10f1-9042-40c8-ab0f-d2e78fca3e1b", 00:21:30.194 "md_size": 32, 00:21:30.194 "md_interleave": false, 00:21:30.194 "dif_type": 0, 00:21:30.194 "assigned_rate_limits": { 00:21:30.194 "rw_ios_per_sec": 0, 00:21:30.194 "rw_mbytes_per_sec": 0, 00:21:30.194 "r_mbytes_per_sec": 0, 00:21:30.194 "w_mbytes_per_sec": 0 00:21:30.194 }, 00:21:30.194 "claimed": false, 00:21:30.194 "zoned": false, 00:21:30.194 "supported_io_types": { 00:21:30.194 "read": true, 00:21:30.194 "write": true, 00:21:30.194 "unmap": false, 00:21:30.194 "flush": false, 00:21:30.194 "reset": true, 00:21:30.194 "nvme_admin": false, 00:21:30.194 "nvme_io": false, 00:21:30.194 "nvme_io_md": false, 00:21:30.194 "write_zeroes": true, 00:21:30.194 "zcopy": false, 00:21:30.194 "get_zone_info": false, 00:21:30.194 "zone_management": false, 00:21:30.194 "zone_append": false, 00:21:30.195 "compare": false, 00:21:30.195 "compare_and_write": false, 00:21:30.195 "abort": false, 00:21:30.195 "seek_hole": false, 00:21:30.195 "seek_data": false, 00:21:30.195 "copy": false, 00:21:30.195 "nvme_iov_md": false 00:21:30.195 }, 00:21:30.195 "memory_domains": [ 00:21:30.195 { 00:21:30.195 "dma_device_id": "system", 00:21:30.195 "dma_device_type": 1 00:21:30.195 }, 00:21:30.195 { 00:21:30.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.195 "dma_device_type": 2 00:21:30.195 }, 00:21:30.195 { 
00:21:30.195 "dma_device_id": "system", 00:21:30.195 "dma_device_type": 1 00:21:30.195 }, 00:21:30.195 { 00:21:30.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.195 "dma_device_type": 2 00:21:30.195 } 00:21:30.195 ], 00:21:30.195 "driver_specific": { 00:21:30.195 "raid": { 00:21:30.195 "uuid": "e11d10f1-9042-40c8-ab0f-d2e78fca3e1b", 00:21:30.195 "strip_size_kb": 0, 00:21:30.195 "state": "online", 00:21:30.195 "raid_level": "raid1", 00:21:30.195 "superblock": true, 00:21:30.195 "num_base_bdevs": 2, 00:21:30.195 "num_base_bdevs_discovered": 2, 00:21:30.195 "num_base_bdevs_operational": 2, 00:21:30.195 "base_bdevs_list": [ 00:21:30.195 { 00:21:30.195 "name": "BaseBdev1", 00:21:30.195 "uuid": "68729124-b400-4ec3-a967-1225aef6e2d0", 00:21:30.195 "is_configured": true, 00:21:30.195 "data_offset": 256, 00:21:30.195 "data_size": 7936 00:21:30.195 }, 00:21:30.195 { 00:21:30.195 "name": "BaseBdev2", 00:21:30.195 "uuid": "29e16bee-2c1d-4fec-8403-9844b3b2028d", 00:21:30.195 "is_configured": true, 00:21:30.195 "data_offset": 256, 00:21:30.195 "data_size": 7936 00:21:30.195 } 00:21:30.195 ] 00:21:30.195 } 00:21:30.195 } 00:21:30.195 }' 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:30.195 BaseBdev2' 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.195 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.454 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:30.454 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:21:30.454 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:30.454 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.454 08:53:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.454 [2024-11-27 08:53:26.973736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.454 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.454 "name": "Existed_Raid", 00:21:30.454 "uuid": "e11d10f1-9042-40c8-ab0f-d2e78fca3e1b", 00:21:30.454 "strip_size_kb": 0, 00:21:30.454 "state": "online", 00:21:30.454 "raid_level": "raid1", 00:21:30.454 "superblock": true, 00:21:30.454 "num_base_bdevs": 2, 00:21:30.454 "num_base_bdevs_discovered": 1, 00:21:30.454 "num_base_bdevs_operational": 1, 00:21:30.454 "base_bdevs_list": [ 00:21:30.454 { 00:21:30.454 "name": null, 00:21:30.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.454 "is_configured": false, 00:21:30.454 "data_offset": 0, 00:21:30.454 "data_size": 7936 00:21:30.454 }, 00:21:30.454 { 00:21:30.454 "name": "BaseBdev2", 00:21:30.454 "uuid": 
"29e16bee-2c1d-4fec-8403-9844b3b2028d", 00:21:30.455 "is_configured": true, 00:21:30.455 "data_offset": 256, 00:21:30.455 "data_size": 7936 00:21:30.455 } 00:21:30.455 ] 00:21:30.455 }' 00:21:30.455 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.455 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.021 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.021 [2024-11-27 08:53:27.684415] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:31.021 [2024-11-27 08:53:27.684565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.280 [2024-11-27 08:53:27.782658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.280 [2024-11-27 08:53:27.782739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:31.280 [2024-11-27 08:53:27.782760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:31.280 08:53:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87767 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' -z 87767 ']' 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # kill -0 87767 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # uname 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 87767 00:21:31.280 killing process with pid 87767 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # echo 'killing process with pid 87767' 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # kill 87767 00:21:31.280 [2024-11-27 08:53:27.876041] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:31.280 08:53:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@975 -- # wait 87767 00:21:31.280 [2024-11-27 08:53:27.891432] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:32.654 08:53:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:21:32.654 00:21:32.654 real 0m5.716s 00:21:32.654 user 0m8.540s 00:21:32.654 sys 0m0.865s 00:21:32.654 08:53:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:32.654 
08:53:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.654 ************************************ 00:21:32.654 END TEST raid_state_function_test_sb_md_separate 00:21:32.654 ************************************ 00:21:32.654 08:53:29 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:21:32.654 08:53:29 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:21:32.654 08:53:29 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:32.654 08:53:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:32.654 ************************************ 00:21:32.654 START TEST raid_superblock_test_md_separate 00:21:32.654 ************************************ 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # raid_superblock_test raid1 2 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88024 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88024 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # '[' -z 88024 ']' 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:32.654 08:53:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:32.655 08:53:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.655 [2024-11-27 08:53:29.170326] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:21:32.655 [2024-11-27 08:53:29.170740] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88024 ] 00:21:32.655 [2024-11-27 08:53:29.344454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.912 [2024-11-27 08:53:29.494967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.171 [2024-11-27 08:53:29.717614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.171 [2024-11-27 08:53:29.717704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@865 -- # return 0 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:33.801 08:53:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.801 malloc1 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.801 [2024-11-27 08:53:30.253931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:33.801 [2024-11-27 08:53:30.254025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.801 [2024-11-27 08:53:30.254064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:33.801 [2024-11-27 08:53:30.254082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.801 [2024-11-27 08:53:30.256784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.801 [2024-11-27 08:53:30.256831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:21:33.801 pt1 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.801 malloc2 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:33.801 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.801 08:53:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.801 [2024-11-27 08:53:30.315761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:33.801 [2024-11-27 08:53:30.315985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.801 [2024-11-27 08:53:30.316077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:33.802 [2024-11-27 08:53:30.316248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.802 [2024-11-27 08:53:30.319050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.802 [2024-11-27 08:53:30.319251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:33.802 pt2 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.802 [2024-11-27 08:53:30.327996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:33.802 [2024-11-27 08:53:30.330654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:33.802 [2024-11-27 08:53:30.330930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:33.802 [2024-11-27 08:53:30.330953] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:33.802 [2024-11-27 08:53:30.331060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:33.802 [2024-11-27 08:53:30.331236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:33.802 [2024-11-27 08:53:30.331258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:33.802 [2024-11-27 08:53:30.331426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.802 08:53:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.802 "name": "raid_bdev1", 00:21:33.802 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:33.802 "strip_size_kb": 0, 00:21:33.802 "state": "online", 00:21:33.802 "raid_level": "raid1", 00:21:33.802 "superblock": true, 00:21:33.802 "num_base_bdevs": 2, 00:21:33.802 "num_base_bdevs_discovered": 2, 00:21:33.802 "num_base_bdevs_operational": 2, 00:21:33.802 "base_bdevs_list": [ 00:21:33.802 { 00:21:33.802 "name": "pt1", 00:21:33.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.802 "is_configured": true, 00:21:33.802 "data_offset": 256, 00:21:33.802 "data_size": 7936 00:21:33.802 }, 00:21:33.802 { 00:21:33.802 "name": "pt2", 00:21:33.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.802 "is_configured": true, 00:21:33.802 "data_offset": 256, 00:21:33.802 "data_size": 7936 00:21:33.802 } 00:21:33.802 ] 00:21:33.802 }' 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.802 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.076 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:34.076 08:53:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:34.076 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:34.076 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:34.076 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:34.077 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:34.077 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:34.077 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.077 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.077 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:34.077 [2024-11-27 08:53:30.816515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:34.077 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.336 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:34.336 "name": "raid_bdev1", 00:21:34.336 "aliases": [ 00:21:34.336 "21713a9e-9944-4be9-a437-966ae931900c" 00:21:34.336 ], 00:21:34.336 "product_name": "Raid Volume", 00:21:34.336 "block_size": 4096, 00:21:34.336 "num_blocks": 7936, 00:21:34.336 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:34.336 "md_size": 32, 00:21:34.336 "md_interleave": false, 00:21:34.336 "dif_type": 0, 00:21:34.336 "assigned_rate_limits": { 00:21:34.336 "rw_ios_per_sec": 0, 00:21:34.336 "rw_mbytes_per_sec": 0, 00:21:34.336 "r_mbytes_per_sec": 0, 00:21:34.336 "w_mbytes_per_sec": 0 00:21:34.336 }, 00:21:34.336 "claimed": false, 00:21:34.336 "zoned": false, 
00:21:34.336 "supported_io_types": { 00:21:34.336 "read": true, 00:21:34.336 "write": true, 00:21:34.336 "unmap": false, 00:21:34.336 "flush": false, 00:21:34.336 "reset": true, 00:21:34.336 "nvme_admin": false, 00:21:34.336 "nvme_io": false, 00:21:34.336 "nvme_io_md": false, 00:21:34.336 "write_zeroes": true, 00:21:34.336 "zcopy": false, 00:21:34.336 "get_zone_info": false, 00:21:34.336 "zone_management": false, 00:21:34.336 "zone_append": false, 00:21:34.336 "compare": false, 00:21:34.336 "compare_and_write": false, 00:21:34.336 "abort": false, 00:21:34.336 "seek_hole": false, 00:21:34.336 "seek_data": false, 00:21:34.336 "copy": false, 00:21:34.336 "nvme_iov_md": false 00:21:34.336 }, 00:21:34.336 "memory_domains": [ 00:21:34.336 { 00:21:34.336 "dma_device_id": "system", 00:21:34.336 "dma_device_type": 1 00:21:34.336 }, 00:21:34.336 { 00:21:34.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.336 "dma_device_type": 2 00:21:34.336 }, 00:21:34.336 { 00:21:34.336 "dma_device_id": "system", 00:21:34.336 "dma_device_type": 1 00:21:34.336 }, 00:21:34.336 { 00:21:34.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.336 "dma_device_type": 2 00:21:34.336 } 00:21:34.336 ], 00:21:34.336 "driver_specific": { 00:21:34.336 "raid": { 00:21:34.336 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:34.336 "strip_size_kb": 0, 00:21:34.336 "state": "online", 00:21:34.336 "raid_level": "raid1", 00:21:34.336 "superblock": true, 00:21:34.336 "num_base_bdevs": 2, 00:21:34.336 "num_base_bdevs_discovered": 2, 00:21:34.336 "num_base_bdevs_operational": 2, 00:21:34.336 "base_bdevs_list": [ 00:21:34.336 { 00:21:34.336 "name": "pt1", 00:21:34.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:34.336 "is_configured": true, 00:21:34.336 "data_offset": 256, 00:21:34.336 "data_size": 7936 00:21:34.336 }, 00:21:34.336 { 00:21:34.336 "name": "pt2", 00:21:34.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.336 "is_configured": true, 00:21:34.336 "data_offset": 256, 
00:21:34.336 "data_size": 7936 00:21:34.336 } 00:21:34.336 ] 00:21:34.336 } 00:21:34.336 } 00:21:34.336 }' 00:21:34.336 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:34.336 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:34.336 pt2' 00:21:34.337 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.337 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:34.337 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:34.337 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:34.337 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.337 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.337 08:53:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.337 08:53:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.337 [2024-11-27 08:53:31.072512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:34.337 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=21713a9e-9944-4be9-a437-966ae931900c 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 21713a9e-9944-4be9-a437-966ae931900c ']' 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.596 [2024-11-27 08:53:31.124210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.596 [2024-11-27 08:53:31.124416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.596 [2024-11-27 08:53:31.124684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.596 [2024-11-27 08:53:31.124891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.596 [2024-11-27 08:53:31.125081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:34.596 08:53:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.596 [2024-11-27 08:53:31.280227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:34.596 [2024-11-27 08:53:31.283130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:34.596 [2024-11-27 08:53:31.283256] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:34.596 [2024-11-27 08:53:31.283392] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:34.596 [2024-11-27 08:53:31.283421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.596 [2024-11-27 08:53:31.283439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:34.596 request: 00:21:34.596 { 00:21:34.596 "name": 
"raid_bdev1", 00:21:34.596 "raid_level": "raid1", 00:21:34.596 "base_bdevs": [ 00:21:34.596 "malloc1", 00:21:34.596 "malloc2" 00:21:34.596 ], 00:21:34.596 "superblock": false, 00:21:34.596 "method": "bdev_raid_create", 00:21:34.596 "req_id": 1 00:21:34.596 } 00:21:34.596 Got JSON-RPC error response 00:21:34.596 response: 00:21:34.596 { 00:21:34.596 "code": -17, 00:21:34.596 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:34.596 } 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.597 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.597 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:34.597 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:34.597 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:34.597 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.597 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.597 [2024-11-27 08:53:31.352286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:34.597 [2024-11-27 08:53:31.352377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.597 [2024-11-27 08:53:31.352410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:34.597 [2024-11-27 08:53:31.352430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.855 [2024-11-27 08:53:31.355249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.855 [2024-11-27 08:53:31.355468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:34.855 [2024-11-27 08:53:31.355558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:34.855 [2024-11-27 08:53:31.355638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:34.855 pt1 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.855 "name": "raid_bdev1", 00:21:34.855 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:34.855 "strip_size_kb": 0, 00:21:34.855 "state": "configuring", 00:21:34.855 "raid_level": "raid1", 00:21:34.855 "superblock": true, 00:21:34.855 "num_base_bdevs": 2, 00:21:34.855 "num_base_bdevs_discovered": 1, 00:21:34.855 "num_base_bdevs_operational": 2, 00:21:34.855 "base_bdevs_list": [ 00:21:34.855 { 00:21:34.855 "name": "pt1", 00:21:34.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:34.855 "is_configured": true, 00:21:34.855 "data_offset": 256, 00:21:34.855 "data_size": 7936 00:21:34.855 }, 00:21:34.855 { 00:21:34.855 "name": null, 00:21:34.855 
"uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.855 "is_configured": false, 00:21:34.855 "data_offset": 256, 00:21:34.855 "data_size": 7936 00:21:34.855 } 00:21:34.855 ] 00:21:34.855 }' 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.855 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.114 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:35.114 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:35.114 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:35.114 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:35.114 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.114 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.114 [2024-11-27 08:53:31.868425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:35.114 [2024-11-27 08:53:31.868682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.114 [2024-11-27 08:53:31.868729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:35.114 [2024-11-27 08:53:31.868750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.114 [2024-11-27 08:53:31.869122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.114 [2024-11-27 08:53:31.869155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:35.114 [2024-11-27 08:53:31.869234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:21:35.114 [2024-11-27 08:53:31.869273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:35.114 [2024-11-27 08:53:31.869459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:35.114 [2024-11-27 08:53:31.869483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:35.114 [2024-11-27 08:53:31.869576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:35.114 [2024-11-27 08:53:31.869728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:35.114 [2024-11-27 08:53:31.869744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:35.114 [2024-11-27 08:53:31.869884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.373 pt2 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.373 "name": "raid_bdev1", 00:21:35.373 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:35.373 "strip_size_kb": 0, 00:21:35.373 "state": "online", 00:21:35.373 "raid_level": "raid1", 00:21:35.373 "superblock": true, 00:21:35.373 "num_base_bdevs": 2, 00:21:35.373 "num_base_bdevs_discovered": 2, 00:21:35.373 "num_base_bdevs_operational": 2, 00:21:35.373 "base_bdevs_list": [ 00:21:35.373 { 00:21:35.373 "name": "pt1", 00:21:35.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:35.373 "is_configured": true, 00:21:35.373 "data_offset": 256, 00:21:35.373 "data_size": 7936 00:21:35.373 }, 00:21:35.373 { 00:21:35.373 "name": "pt2", 00:21:35.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:35.373 "is_configured": true, 00:21:35.373 "data_offset": 256, 
00:21:35.373 "data_size": 7936 00:21:35.373 } 00:21:35.373 ] 00:21:35.373 }' 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.373 08:53:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.940 [2024-11-27 08:53:32.400969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.940 "name": "raid_bdev1", 00:21:35.940 "aliases": [ 00:21:35.940 "21713a9e-9944-4be9-a437-966ae931900c" 00:21:35.940 ], 00:21:35.940 "product_name": 
"Raid Volume", 00:21:35.940 "block_size": 4096, 00:21:35.940 "num_blocks": 7936, 00:21:35.940 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:35.940 "md_size": 32, 00:21:35.940 "md_interleave": false, 00:21:35.940 "dif_type": 0, 00:21:35.940 "assigned_rate_limits": { 00:21:35.940 "rw_ios_per_sec": 0, 00:21:35.940 "rw_mbytes_per_sec": 0, 00:21:35.940 "r_mbytes_per_sec": 0, 00:21:35.940 "w_mbytes_per_sec": 0 00:21:35.940 }, 00:21:35.940 "claimed": false, 00:21:35.940 "zoned": false, 00:21:35.940 "supported_io_types": { 00:21:35.940 "read": true, 00:21:35.940 "write": true, 00:21:35.940 "unmap": false, 00:21:35.940 "flush": false, 00:21:35.940 "reset": true, 00:21:35.940 "nvme_admin": false, 00:21:35.940 "nvme_io": false, 00:21:35.940 "nvme_io_md": false, 00:21:35.940 "write_zeroes": true, 00:21:35.940 "zcopy": false, 00:21:35.940 "get_zone_info": false, 00:21:35.940 "zone_management": false, 00:21:35.940 "zone_append": false, 00:21:35.940 "compare": false, 00:21:35.940 "compare_and_write": false, 00:21:35.940 "abort": false, 00:21:35.940 "seek_hole": false, 00:21:35.940 "seek_data": false, 00:21:35.940 "copy": false, 00:21:35.940 "nvme_iov_md": false 00:21:35.940 }, 00:21:35.940 "memory_domains": [ 00:21:35.940 { 00:21:35.940 "dma_device_id": "system", 00:21:35.940 "dma_device_type": 1 00:21:35.940 }, 00:21:35.940 { 00:21:35.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.940 "dma_device_type": 2 00:21:35.940 }, 00:21:35.940 { 00:21:35.940 "dma_device_id": "system", 00:21:35.940 "dma_device_type": 1 00:21:35.940 }, 00:21:35.940 { 00:21:35.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.940 "dma_device_type": 2 00:21:35.940 } 00:21:35.940 ], 00:21:35.940 "driver_specific": { 00:21:35.940 "raid": { 00:21:35.940 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:35.940 "strip_size_kb": 0, 00:21:35.940 "state": "online", 00:21:35.940 "raid_level": "raid1", 00:21:35.940 "superblock": true, 00:21:35.940 "num_base_bdevs": 2, 00:21:35.940 
"num_base_bdevs_discovered": 2, 00:21:35.940 "num_base_bdevs_operational": 2, 00:21:35.940 "base_bdevs_list": [ 00:21:35.940 { 00:21:35.940 "name": "pt1", 00:21:35.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:35.940 "is_configured": true, 00:21:35.940 "data_offset": 256, 00:21:35.940 "data_size": 7936 00:21:35.940 }, 00:21:35.940 { 00:21:35.940 "name": "pt2", 00:21:35.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:35.940 "is_configured": true, 00:21:35.940 "data_offset": 256, 00:21:35.940 "data_size": 7936 00:21:35.940 } 00:21:35.940 ] 00:21:35.940 } 00:21:35.940 } 00:21:35.940 }' 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:35.940 pt2' 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.940 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.941 
08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.941 [2024-11-27 08:53:32.648954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:35.941 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 21713a9e-9944-4be9-a437-966ae931900c '!=' 21713a9e-9944-4be9-a437-966ae931900c ']' 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.200 [2024-11-27 08:53:32.704753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.200 08:53:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.200 "name": "raid_bdev1", 00:21:36.200 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:36.200 "strip_size_kb": 0, 00:21:36.200 "state": "online", 00:21:36.200 "raid_level": "raid1", 00:21:36.200 "superblock": true, 00:21:36.200 "num_base_bdevs": 2, 00:21:36.200 "num_base_bdevs_discovered": 1, 00:21:36.200 "num_base_bdevs_operational": 1, 00:21:36.200 "base_bdevs_list": [ 00:21:36.200 { 00:21:36.200 "name": null, 00:21:36.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.200 "is_configured": false, 00:21:36.200 "data_offset": 0, 00:21:36.200 "data_size": 7936 00:21:36.200 }, 00:21:36.200 { 00:21:36.200 "name": "pt2", 00:21:36.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.200 "is_configured": true, 00:21:36.200 "data_offset": 256, 00:21:36.200 "data_size": 7936 00:21:36.200 } 00:21:36.200 ] 00:21:36.200 }' 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:36.200 08:53:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.458 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:36.458 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.458 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.458 [2024-11-27 08:53:33.212896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.458 [2024-11-27 08:53:33.213083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.458 [2024-11-27 08:53:33.213243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.458 [2024-11-27 08:53:33.213326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.458 [2024-11-27 08:53:33.213377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:36.717 08:53:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.717 [2024-11-27 08:53:33.284920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:36.717 [2024-11-27 08:53:33.285035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.717 
[2024-11-27 08:53:33.285068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:36.717 [2024-11-27 08:53:33.285087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.717 [2024-11-27 08:53:33.288244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.717 [2024-11-27 08:53:33.288488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:36.717 [2024-11-27 08:53:33.288584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:36.717 [2024-11-27 08:53:33.288659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:36.717 [2024-11-27 08:53:33.288806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:36.717 [2024-11-27 08:53:33.288829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:36.717 [2024-11-27 08:53:33.288931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:36.717 [2024-11-27 08:53:33.289097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:36.717 [2024-11-27 08:53:33.289112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:36.717 [2024-11-27 08:53:33.289316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.717 pt2 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.717 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.717 "name": "raid_bdev1", 00:21:36.717 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:36.717 "strip_size_kb": 0, 00:21:36.717 "state": "online", 00:21:36.717 "raid_level": "raid1", 00:21:36.717 "superblock": true, 00:21:36.717 "num_base_bdevs": 2, 00:21:36.717 "num_base_bdevs_discovered": 1, 00:21:36.717 "num_base_bdevs_operational": 1, 00:21:36.717 "base_bdevs_list": [ 00:21:36.717 { 00:21:36.717 
"name": null, 00:21:36.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.717 "is_configured": false, 00:21:36.717 "data_offset": 256, 00:21:36.717 "data_size": 7936 00:21:36.717 }, 00:21:36.717 { 00:21:36.717 "name": "pt2", 00:21:36.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.717 "is_configured": true, 00:21:36.717 "data_offset": 256, 00:21:36.717 "data_size": 7936 00:21:36.717 } 00:21:36.717 ] 00:21:36.718 }' 00:21:36.718 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.718 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.284 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:37.284 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.284 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.284 [2024-11-27 08:53:33.821101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.284 [2024-11-27 08:53:33.821144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.284 [2024-11-27 08:53:33.821271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.284 [2024-11-27 08:53:33.821405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.284 [2024-11-27 08:53:33.821422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:37.284 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.284 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.284 08:53:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:37.284 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.284 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.284 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.285 [2024-11-27 08:53:33.885086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:37.285 [2024-11-27 08:53:33.885176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.285 [2024-11-27 08:53:33.885209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:37.285 [2024-11-27 08:53:33.885240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.285 [2024-11-27 08:53:33.888244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.285 [2024-11-27 08:53:33.888290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:37.285 [2024-11-27 08:53:33.888422] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:21:37.285 [2024-11-27 08:53:33.888491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:37.285 [2024-11-27 08:53:33.888662] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:37.285 [2024-11-27 08:53:33.888680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.285 [2024-11-27 08:53:33.888703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:37.285 [2024-11-27 08:53:33.888808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:37.285 [2024-11-27 08:53:33.888934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:37.285 [2024-11-27 08:53:33.888951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:37.285 [2024-11-27 08:53:33.889047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:37.285 [2024-11-27 08:53:33.889199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:37.285 [2024-11-27 08:53:33.889220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:37.285 [2024-11-27 08:53:33.889407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.285 pt1 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.285 "name": "raid_bdev1", 00:21:37.285 "uuid": "21713a9e-9944-4be9-a437-966ae931900c", 00:21:37.285 "strip_size_kb": 0, 00:21:37.285 "state": "online", 00:21:37.285 "raid_level": "raid1", 00:21:37.285 "superblock": true, 00:21:37.285 "num_base_bdevs": 2, 00:21:37.285 "num_base_bdevs_discovered": 1, 00:21:37.285 
"num_base_bdevs_operational": 1, 00:21:37.285 "base_bdevs_list": [ 00:21:37.285 { 00:21:37.285 "name": null, 00:21:37.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.285 "is_configured": false, 00:21:37.285 "data_offset": 256, 00:21:37.285 "data_size": 7936 00:21:37.285 }, 00:21:37.285 { 00:21:37.285 "name": "pt2", 00:21:37.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.285 "is_configured": true, 00:21:37.285 "data_offset": 256, 00:21:37.285 "data_size": 7936 00:21:37.285 } 00:21:37.285 ] 00:21:37.285 }' 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.285 08:53:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.912 [2024-11-27 
08:53:34.437938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 21713a9e-9944-4be9-a437-966ae931900c '!=' 21713a9e-9944-4be9-a437-966ae931900c ']' 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88024 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' -z 88024 ']' 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # kill -0 88024 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # uname 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 88024 00:21:37.912 killing process with pid 88024 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # echo 'killing process with pid 88024' 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # kill 88024 00:21:37.912 [2024-11-27 08:53:34.515383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.912 08:53:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@975 -- # wait 88024 00:21:37.912 [2024-11-27 08:53:34.515493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:21:37.912 [2024-11-27 08:53:34.515575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.912 [2024-11-27 08:53:34.515602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:38.174 [2024-11-27 08:53:34.720440] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:39.109 08:53:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:21:39.109 00:21:39.109 real 0m6.778s 00:21:39.109 user 0m10.584s 00:21:39.109 sys 0m1.022s 00:21:39.109 ************************************ 00:21:39.109 END TEST raid_superblock_test_md_separate 00:21:39.109 ************************************ 00:21:39.109 08:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:39.109 08:53:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.368 08:53:35 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:21:39.368 08:53:35 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:21:39.368 08:53:35 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:21:39.368 08:53:35 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:39.368 08:53:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:39.368 ************************************ 00:21:39.368 START TEST raid_rebuild_test_sb_md_separate 00:21:39.368 ************************************ 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 2 true false true 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:39.368 
08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88349 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88349 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@832 -- # '[' -z 88349 ']' 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:39.368 08:53:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.368 [2024-11-27 08:53:36.055020] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:21:39.368 [2024-11-27 08:53:36.055459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:21:39.368 Zero copy mechanism will not be used. 00:21:39.368 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88349 ] 00:21:39.627 [2024-11-27 08:53:36.255624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.887 [2024-11-27 08:53:36.404255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.887 [2024-11-27 08:53:36.627740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.887 [2024-11-27 08:53:36.628069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:40.455 08:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:40.455 08:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # return 0 00:21:40.455 08:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:40.455 08:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:21:40.455 08:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.455 08:53:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.455 BaseBdev1_malloc 
00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.455 [2024-11-27 08:53:37.043968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:40.455 [2024-11-27 08:53:37.044056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.455 [2024-11-27 08:53:37.044093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:40.455 [2024-11-27 08:53:37.044113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.455 [2024-11-27 08:53:37.047023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.455 [2024-11-27 08:53:37.047071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:40.455 BaseBdev1 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.455 BaseBdev2_malloc 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.455 [2024-11-27 08:53:37.102123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:40.455 [2024-11-27 08:53:37.102212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.455 [2024-11-27 08:53:37.102245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:40.455 [2024-11-27 08:53:37.102281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.455 [2024-11-27 08:53:37.105223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.455 [2024-11-27 08:53:37.105274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:40.455 BaseBdev2 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.455 spare_malloc 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.455 spare_delay 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.455 [2024-11-27 08:53:37.182142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:40.455 [2024-11-27 08:53:37.182228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.455 [2024-11-27 08:53:37.182282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:40.455 [2024-11-27 08:53:37.182300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.455 [2024-11-27 08:53:37.185218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.455 [2024-11-27 08:53:37.185269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:40.455 spare 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.455 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.456 [2024-11-27 08:53:37.190292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.456 [2024-11-27 08:53:37.193009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:40.456 [2024-11-27 08:53:37.193258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:40.456 [2024-11-27 08:53:37.193282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:40.456 [2024-11-27 08:53:37.193440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:40.456 [2024-11-27 08:53:37.193648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:40.456 [2024-11-27 08:53:37.193664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:40.456 [2024-11-27 08:53:37.193816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:40.456 08:53:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.456 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.714 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.714 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.714 "name": "raid_bdev1", 00:21:40.714 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:40.714 "strip_size_kb": 0, 00:21:40.714 "state": "online", 00:21:40.714 "raid_level": "raid1", 00:21:40.714 "superblock": true, 00:21:40.714 "num_base_bdevs": 2, 00:21:40.714 "num_base_bdevs_discovered": 2, 00:21:40.714 "num_base_bdevs_operational": 2, 00:21:40.714 "base_bdevs_list": [ 00:21:40.714 { 00:21:40.714 "name": "BaseBdev1", 00:21:40.714 "uuid": "a9e55c4a-5216-5724-a6fd-a84cbc1f2782", 00:21:40.714 "is_configured": true, 00:21:40.714 "data_offset": 256, 00:21:40.714 "data_size": 7936 00:21:40.714 }, 00:21:40.714 { 00:21:40.714 "name": "BaseBdev2", 00:21:40.714 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:40.714 "is_configured": true, 00:21:40.714 "data_offset": 256, 00:21:40.714 "data_size": 7936 
00:21:40.714 } 00:21:40.714 ] 00:21:40.715 }' 00:21:40.715 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.715 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.973 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:40.973 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:40.973 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.973 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.973 [2024-11-27 08:53:37.730853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.229 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.229 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:41.229 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.229 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.229 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.229 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:41.229 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.229 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.230 08:53:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:41.488 [2024-11-27 08:53:38.146703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:41.488 /dev/nbd0 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@870 -- # local i 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # break 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.488 1+0 records in 00:21:41.488 1+0 records out 00:21:41.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477044 s, 8.6 MB/s 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # size=4096 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # return 0 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.488 08:53:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:41.488 08:53:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:42.866 7936+0 records in 00:21:42.866 7936+0 records out 00:21:42.866 32505856 bytes (33 MB, 31 MiB) copied, 0.974914 s, 33.3 MB/s 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:42.866 [2024-11-27 08:53:39.491144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.866 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.867 08:53:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.867 [2024-11-27 08:53:39.503274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.867 "name": "raid_bdev1", 00:21:42.867 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:42.867 "strip_size_kb": 0, 00:21:42.867 "state": "online", 00:21:42.867 "raid_level": "raid1", 00:21:42.867 "superblock": true, 00:21:42.867 "num_base_bdevs": 2, 00:21:42.867 "num_base_bdevs_discovered": 1, 00:21:42.867 "num_base_bdevs_operational": 1, 00:21:42.867 "base_bdevs_list": [ 00:21:42.867 { 00:21:42.867 "name": null, 00:21:42.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.867 "is_configured": false, 00:21:42.867 "data_offset": 0, 00:21:42.867 "data_size": 7936 00:21:42.867 }, 00:21:42.867 { 00:21:42.867 "name": "BaseBdev2", 00:21:42.867 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:42.867 "is_configured": true, 00:21:42.867 "data_offset": 256, 00:21:42.867 "data_size": 7936 00:21:42.867 } 00:21:42.867 ] 00:21:42.867 }' 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.867 08:53:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.433 08:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.433 08:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.433 08:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:43.433 [2024-11-27 08:53:40.031503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.433 [2024-11-27 08:53:40.045702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:43.433 08:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.433 08:53:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:43.433 [2024-11-27 08:53:40.048406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.369 "name": "raid_bdev1", 00:21:44.369 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:44.369 "strip_size_kb": 0, 00:21:44.369 "state": "online", 00:21:44.369 "raid_level": "raid1", 00:21:44.369 "superblock": true, 00:21:44.369 "num_base_bdevs": 2, 00:21:44.369 "num_base_bdevs_discovered": 2, 00:21:44.369 "num_base_bdevs_operational": 2, 00:21:44.369 "process": { 00:21:44.369 "type": "rebuild", 00:21:44.369 "target": "spare", 00:21:44.369 "progress": { 00:21:44.369 "blocks": 2304, 00:21:44.369 "percent": 29 00:21:44.369 } 00:21:44.369 }, 00:21:44.369 "base_bdevs_list": [ 00:21:44.369 { 00:21:44.369 "name": "spare", 00:21:44.369 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:44.369 "is_configured": true, 00:21:44.369 "data_offset": 256, 00:21:44.369 "data_size": 7936 00:21:44.369 }, 00:21:44.369 { 00:21:44.369 "name": "BaseBdev2", 00:21:44.369 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:44.369 "is_configured": true, 00:21:44.369 "data_offset": 256, 00:21:44.369 "data_size": 7936 00:21:44.369 } 00:21:44.369 ] 00:21:44.369 }' 00:21:44.369 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.628 08:53:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.628 [2024-11-27 08:53:41.246367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.628 [2024-11-27 08:53:41.260653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:44.628 [2024-11-27 08:53:41.260762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.628 [2024-11-27 08:53:41.260787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.628 [2024-11-27 08:53:41.260803] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.628 08:53:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.628 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.629 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.629 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.629 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.629 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.629 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.629 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.629 "name": "raid_bdev1", 00:21:44.629 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:44.629 "strip_size_kb": 0, 00:21:44.629 "state": "online", 00:21:44.629 "raid_level": "raid1", 00:21:44.629 "superblock": true, 00:21:44.629 "num_base_bdevs": 2, 00:21:44.629 "num_base_bdevs_discovered": 1, 00:21:44.629 "num_base_bdevs_operational": 1, 00:21:44.629 "base_bdevs_list": [ 00:21:44.629 { 00:21:44.629 "name": null, 00:21:44.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.629 "is_configured": false, 00:21:44.629 "data_offset": 0, 00:21:44.629 "data_size": 7936 00:21:44.629 }, 00:21:44.629 { 00:21:44.629 "name": "BaseBdev2", 00:21:44.629 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:44.629 "is_configured": true, 00:21:44.629 "data_offset": 256, 00:21:44.629 "data_size": 7936 00:21:44.629 } 00:21:44.629 ] 00:21:44.629 }' 00:21:44.629 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.629 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.196 "name": "raid_bdev1", 00:21:45.196 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:45.196 "strip_size_kb": 0, 00:21:45.196 "state": "online", 00:21:45.196 "raid_level": "raid1", 00:21:45.196 "superblock": true, 00:21:45.196 "num_base_bdevs": 2, 00:21:45.196 "num_base_bdevs_discovered": 1, 00:21:45.196 "num_base_bdevs_operational": 1, 00:21:45.196 "base_bdevs_list": [ 00:21:45.196 { 00:21:45.196 "name": null, 00:21:45.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.196 
"is_configured": false, 00:21:45.196 "data_offset": 0, 00:21:45.196 "data_size": 7936 00:21:45.196 }, 00:21:45.196 { 00:21:45.196 "name": "BaseBdev2", 00:21:45.196 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:45.196 "is_configured": true, 00:21:45.196 "data_offset": 256, 00:21:45.196 "data_size": 7936 00:21:45.196 } 00:21:45.196 ] 00:21:45.196 }' 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:45.196 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:45.454 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:45.454 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:45.454 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.454 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:45.454 [2024-11-27 08:53:41.961063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.454 [2024-11-27 08:53:41.974197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:45.454 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.454 08:53:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:45.454 [2024-11-27 08:53:41.976963] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.393 08:53:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.393 08:53:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.393 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.393 "name": "raid_bdev1", 00:21:46.393 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:46.393 "strip_size_kb": 0, 00:21:46.393 "state": "online", 00:21:46.394 "raid_level": "raid1", 00:21:46.394 "superblock": true, 00:21:46.394 "num_base_bdevs": 2, 00:21:46.394 "num_base_bdevs_discovered": 2, 00:21:46.394 "num_base_bdevs_operational": 2, 00:21:46.394 "process": { 00:21:46.394 "type": "rebuild", 00:21:46.394 "target": "spare", 00:21:46.394 "progress": { 00:21:46.394 "blocks": 2560, 00:21:46.394 "percent": 32 00:21:46.394 } 00:21:46.394 }, 00:21:46.394 "base_bdevs_list": [ 00:21:46.394 { 00:21:46.394 "name": "spare", 00:21:46.394 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:46.394 "is_configured": true, 00:21:46.394 "data_offset": 256, 00:21:46.394 "data_size": 7936 00:21:46.394 }, 
00:21:46.394 { 00:21:46.394 "name": "BaseBdev2", 00:21:46.394 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:46.394 "is_configured": true, 00:21:46.394 "data_offset": 256, 00:21:46.394 "data_size": 7936 00:21:46.394 } 00:21:46.394 ] 00:21:46.394 }' 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:46.394 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=777 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.394 08:53:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:46.394 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.653 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.653 "name": "raid_bdev1", 00:21:46.653 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:46.653 "strip_size_kb": 0, 00:21:46.653 "state": "online", 00:21:46.653 "raid_level": "raid1", 00:21:46.653 "superblock": true, 00:21:46.653 "num_base_bdevs": 2, 00:21:46.653 "num_base_bdevs_discovered": 2, 00:21:46.653 "num_base_bdevs_operational": 2, 00:21:46.653 "process": { 00:21:46.653 "type": "rebuild", 00:21:46.653 "target": "spare", 00:21:46.653 "progress": { 00:21:46.653 "blocks": 2816, 00:21:46.653 "percent": 35 00:21:46.653 } 00:21:46.653 }, 00:21:46.653 "base_bdevs_list": [ 00:21:46.653 { 00:21:46.653 "name": "spare", 00:21:46.653 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:46.653 "is_configured": true, 00:21:46.653 "data_offset": 256, 00:21:46.653 "data_size": 7936 00:21:46.653 }, 00:21:46.653 { 00:21:46.653 "name": "BaseBdev2", 00:21:46.653 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:46.653 
"is_configured": true, 00:21:46.653 "data_offset": 256, 00:21:46.653 "data_size": 7936 00:21:46.653 } 00:21:46.653 ] 00:21:46.653 }' 00:21:46.653 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.653 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.653 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.653 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.653 08:53:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.588 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:47.588 08:53:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.846 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:47.846 "name": "raid_bdev1", 00:21:47.846 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:47.846 "strip_size_kb": 0, 00:21:47.846 "state": "online", 00:21:47.846 "raid_level": "raid1", 00:21:47.846 "superblock": true, 00:21:47.846 "num_base_bdevs": 2, 00:21:47.847 "num_base_bdevs_discovered": 2, 00:21:47.847 "num_base_bdevs_operational": 2, 00:21:47.847 "process": { 00:21:47.847 "type": "rebuild", 00:21:47.847 "target": "spare", 00:21:47.847 "progress": { 00:21:47.847 "blocks": 5632, 00:21:47.847 "percent": 70 00:21:47.847 } 00:21:47.847 }, 00:21:47.847 "base_bdevs_list": [ 00:21:47.847 { 00:21:47.847 "name": "spare", 00:21:47.847 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:47.847 "is_configured": true, 00:21:47.847 "data_offset": 256, 00:21:47.847 "data_size": 7936 00:21:47.847 }, 00:21:47.847 { 00:21:47.847 "name": "BaseBdev2", 00:21:47.847 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:47.847 "is_configured": true, 00:21:47.847 "data_offset": 256, 00:21:47.847 "data_size": 7936 00:21:47.847 } 00:21:47.847 ] 00:21:47.847 }' 00:21:47.847 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:47.847 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.847 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:47.847 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.847 08:53:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:48.414 [2024-11-27 08:53:45.106270] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:21:48.414 [2024-11-27 08:53:45.106423] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:48.414 [2024-11-27 08:53:45.106645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.981 "name": "raid_bdev1", 00:21:48.981 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:48.981 "strip_size_kb": 0, 00:21:48.981 "state": "online", 00:21:48.981 "raid_level": "raid1", 00:21:48.981 "superblock": true, 00:21:48.981 
"num_base_bdevs": 2, 00:21:48.981 "num_base_bdevs_discovered": 2, 00:21:48.981 "num_base_bdevs_operational": 2, 00:21:48.981 "base_bdevs_list": [ 00:21:48.981 { 00:21:48.981 "name": "spare", 00:21:48.981 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:48.981 "is_configured": true, 00:21:48.981 "data_offset": 256, 00:21:48.981 "data_size": 7936 00:21:48.981 }, 00:21:48.981 { 00:21:48.981 "name": "BaseBdev2", 00:21:48.981 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:48.981 "is_configured": true, 00:21:48.981 "data_offset": 256, 00:21:48.981 "data_size": 7936 00:21:48.981 } 00:21:48.981 ] 00:21:48.981 }' 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.981 
08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.981 "name": "raid_bdev1", 00:21:48.981 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:48.981 "strip_size_kb": 0, 00:21:48.981 "state": "online", 00:21:48.981 "raid_level": "raid1", 00:21:48.981 "superblock": true, 00:21:48.981 "num_base_bdevs": 2, 00:21:48.981 "num_base_bdevs_discovered": 2, 00:21:48.981 "num_base_bdevs_operational": 2, 00:21:48.981 "base_bdevs_list": [ 00:21:48.981 { 00:21:48.981 "name": "spare", 00:21:48.981 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:48.981 "is_configured": true, 00:21:48.981 "data_offset": 256, 00:21:48.981 "data_size": 7936 00:21:48.981 }, 00:21:48.981 { 00:21:48.981 "name": "BaseBdev2", 00:21:48.981 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:48.981 "is_configured": true, 00:21:48.981 "data_offset": 256, 00:21:48.981 "data_size": 7936 00:21:48.981 } 00:21:48.981 ] 00:21:48.981 }' 00:21:48.981 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.256 "name": "raid_bdev1", 00:21:49.256 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:49.256 
"strip_size_kb": 0, 00:21:49.256 "state": "online", 00:21:49.256 "raid_level": "raid1", 00:21:49.256 "superblock": true, 00:21:49.256 "num_base_bdevs": 2, 00:21:49.256 "num_base_bdevs_discovered": 2, 00:21:49.256 "num_base_bdevs_operational": 2, 00:21:49.256 "base_bdevs_list": [ 00:21:49.256 { 00:21:49.256 "name": "spare", 00:21:49.256 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:49.256 "is_configured": true, 00:21:49.256 "data_offset": 256, 00:21:49.256 "data_size": 7936 00:21:49.256 }, 00:21:49.256 { 00:21:49.256 "name": "BaseBdev2", 00:21:49.256 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:49.256 "is_configured": true, 00:21:49.256 "data_offset": 256, 00:21:49.256 "data_size": 7936 00:21:49.256 } 00:21:49.256 ] 00:21:49.256 }' 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.256 08:53:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.823 [2024-11-27 08:53:46.383109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:49.823 [2024-11-27 08:53:46.383293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.823 [2024-11-27 08:53:46.383592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.823 [2024-11-27 08:53:46.383708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.823 [2024-11-27 08:53:46.383727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:49.823 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:50.081 /dev/nbd0 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local i 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # break 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:50.081 1+0 records in 00:21:50.081 1+0 records out 00:21:50.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344908 s, 11.9 MB/s 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.081 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # size=4096 00:21:50.082 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.082 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:21:50.082 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # return 0 00:21:50.082 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:50.082 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:50.082 08:53:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:50.340 /dev/nbd1 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local i 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # break 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@885 -- # (( i = 1 )) 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:50.340 1+0 records in 00:21:50.340 1+0 records out 00:21:50.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291181 s, 14.1 MB/s 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # size=4096 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # return 0 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:50.340 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:50.598 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:50.598 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:50.598 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:50.598 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:50.598 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:50.598 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.598 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.857 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.116 [2024-11-27 08:53:47.821808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:51.116 [2024-11-27 08:53:47.821893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.116 [2024-11-27 08:53:47.821928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:51.116 [2024-11-27 08:53:47.821944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:51.116 [2024-11-27 08:53:47.824831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.116 [2024-11-27 08:53:47.824876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:51.116 [2024-11-27 08:53:47.824963] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:51.116 [2024-11-27 08:53:47.825031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.116 [2024-11-27 08:53:47.825199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:51.116 spare 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.116 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.375 [2024-11-27 08:53:47.925340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:51.375 [2024-11-27 08:53:47.925398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:51.376 [2024-11-27 08:53:47.925554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:51.376 [2024-11-27 08:53:47.925767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:51.376 [2024-11-27 08:53:47.925783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:51.376 [2024-11-27 08:53:47.925973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.376 "name": "raid_bdev1", 00:21:51.376 "uuid": 
"57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:51.376 "strip_size_kb": 0, 00:21:51.376 "state": "online", 00:21:51.376 "raid_level": "raid1", 00:21:51.376 "superblock": true, 00:21:51.376 "num_base_bdevs": 2, 00:21:51.376 "num_base_bdevs_discovered": 2, 00:21:51.376 "num_base_bdevs_operational": 2, 00:21:51.376 "base_bdevs_list": [ 00:21:51.376 { 00:21:51.376 "name": "spare", 00:21:51.376 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:51.376 "is_configured": true, 00:21:51.376 "data_offset": 256, 00:21:51.376 "data_size": 7936 00:21:51.376 }, 00:21:51.376 { 00:21:51.376 "name": "BaseBdev2", 00:21:51.376 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:51.376 "is_configured": true, 00:21:51.376 "data_offset": 256, 00:21:51.376 "data_size": 7936 00:21:51.376 } 00:21:51.376 ] 00:21:51.376 }' 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.376 08:53:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.943 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.943 "name": "raid_bdev1", 00:21:51.943 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:51.943 "strip_size_kb": 0, 00:21:51.943 "state": "online", 00:21:51.943 "raid_level": "raid1", 00:21:51.943 "superblock": true, 00:21:51.943 "num_base_bdevs": 2, 00:21:51.943 "num_base_bdevs_discovered": 2, 00:21:51.943 "num_base_bdevs_operational": 2, 00:21:51.943 "base_bdevs_list": [ 00:21:51.943 { 00:21:51.943 "name": "spare", 00:21:51.943 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:51.943 "is_configured": true, 00:21:51.943 "data_offset": 256, 00:21:51.944 "data_size": 7936 00:21:51.944 }, 00:21:51.944 { 00:21:51.944 "name": "BaseBdev2", 00:21:51.944 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:51.944 "is_configured": true, 00:21:51.944 "data_offset": 256, 00:21:51.944 "data_size": 7936 00:21:51.944 } 00:21:51.944 ] 00:21:51.944 }' 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.944 [2024-11-27 08:53:48.670211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:51.944 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.203 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.203 "name": "raid_bdev1", 00:21:52.203 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:52.203 "strip_size_kb": 0, 00:21:52.203 "state": "online", 00:21:52.203 "raid_level": "raid1", 00:21:52.203 "superblock": true, 00:21:52.203 "num_base_bdevs": 2, 00:21:52.203 "num_base_bdevs_discovered": 1, 00:21:52.203 "num_base_bdevs_operational": 1, 00:21:52.203 "base_bdevs_list": [ 00:21:52.203 { 00:21:52.203 "name": null, 00:21:52.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.203 "is_configured": false, 00:21:52.203 "data_offset": 0, 00:21:52.203 "data_size": 7936 00:21:52.203 }, 00:21:52.203 { 00:21:52.203 "name": "BaseBdev2", 00:21:52.203 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:52.203 "is_configured": true, 00:21:52.203 "data_offset": 256, 00:21:52.203 "data_size": 7936 00:21:52.203 } 00:21:52.203 ] 00:21:52.203 }' 00:21:52.203 08:53:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.203 08:53:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.461 08:53:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.461 08:53:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.461 08:53:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:52.461 [2024-11-27 08:53:49.182396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.461 [2024-11-27 08:53:49.182696] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:52.461 [2024-11-27 08:53:49.182723] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:52.461 [2024-11-27 08:53:49.182774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.461 [2024-11-27 08:53:49.195613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:52.461 08:53:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.462 08:53:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:52.462 [2024-11-27 08:53:49.198399] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.876 "name": "raid_bdev1", 00:21:53.876 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:53.876 "strip_size_kb": 0, 00:21:53.876 "state": "online", 00:21:53.876 "raid_level": "raid1", 00:21:53.876 "superblock": true, 00:21:53.876 "num_base_bdevs": 2, 00:21:53.876 "num_base_bdevs_discovered": 2, 00:21:53.876 "num_base_bdevs_operational": 2, 00:21:53.876 "process": { 00:21:53.876 "type": "rebuild", 00:21:53.876 "target": "spare", 00:21:53.876 "progress": { 00:21:53.876 "blocks": 2560, 00:21:53.876 "percent": 32 00:21:53.876 } 00:21:53.876 }, 00:21:53.876 "base_bdevs_list": [ 00:21:53.876 { 00:21:53.876 "name": "spare", 00:21:53.876 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:53.876 "is_configured": true, 00:21:53.876 "data_offset": 256, 00:21:53.876 "data_size": 7936 00:21:53.876 }, 00:21:53.876 { 00:21:53.876 "name": "BaseBdev2", 00:21:53.876 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:53.876 "is_configured": true, 00:21:53.876 "data_offset": 256, 00:21:53.876 "data_size": 7936 00:21:53.876 } 00:21:53.876 ] 00:21:53.876 }' 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.876 [2024-11-27 08:53:50.392138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.876 [2024-11-27 08:53:50.409759] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:53.876 [2024-11-27 08:53:50.409843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.876 [2024-11-27 08:53:50.409867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:53.876 [2024-11-27 08:53:50.409894] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.876 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.877 "name": "raid_bdev1", 00:21:53.877 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:53.877 "strip_size_kb": 0, 00:21:53.877 "state": "online", 00:21:53.877 "raid_level": "raid1", 00:21:53.877 "superblock": true, 00:21:53.877 "num_base_bdevs": 2, 00:21:53.877 "num_base_bdevs_discovered": 1, 00:21:53.877 "num_base_bdevs_operational": 1, 00:21:53.877 "base_bdevs_list": [ 00:21:53.877 { 00:21:53.877 "name": null, 00:21:53.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.877 
"is_configured": false, 00:21:53.877 "data_offset": 0, 00:21:53.877 "data_size": 7936 00:21:53.877 }, 00:21:53.877 { 00:21:53.877 "name": "BaseBdev2", 00:21:53.877 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:53.877 "is_configured": true, 00:21:53.877 "data_offset": 256, 00:21:53.877 "data_size": 7936 00:21:53.877 } 00:21:53.877 ] 00:21:53.877 }' 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.877 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:54.445 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:54.445 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.445 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:54.445 [2024-11-27 08:53:50.957551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:54.445 [2024-11-27 08:53:50.957788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.445 [2024-11-27 08:53:50.957838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:54.445 [2024-11-27 08:53:50.957860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.445 [2024-11-27 08:53:50.958231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.445 [2024-11-27 08:53:50.958262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:54.445 [2024-11-27 08:53:50.958372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:54.445 [2024-11-27 08:53:50.958411] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:21:54.445 [2024-11-27 08:53:50.958428] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:54.445 [2024-11-27 08:53:50.958478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:54.445 [2024-11-27 08:53:50.971351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:54.445 spare 00:21:54.445 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.445 08:53:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:54.445 [2024-11-27 08:53:50.973930] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:55.379 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.379 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.379 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.379 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.379 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.379 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.379 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.380 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.380 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.380 08:53:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:55.380 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.380 "name": "raid_bdev1", 00:21:55.380 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:55.380 "strip_size_kb": 0, 00:21:55.380 "state": "online", 00:21:55.380 "raid_level": "raid1", 00:21:55.380 "superblock": true, 00:21:55.380 "num_base_bdevs": 2, 00:21:55.380 "num_base_bdevs_discovered": 2, 00:21:55.380 "num_base_bdevs_operational": 2, 00:21:55.380 "process": { 00:21:55.380 "type": "rebuild", 00:21:55.380 "target": "spare", 00:21:55.380 "progress": { 00:21:55.380 "blocks": 2560, 00:21:55.380 "percent": 32 00:21:55.380 } 00:21:55.380 }, 00:21:55.380 "base_bdevs_list": [ 00:21:55.380 { 00:21:55.380 "name": "spare", 00:21:55.380 "uuid": "f54aadb5-303c-5cac-b56a-e685b6038f83", 00:21:55.380 "is_configured": true, 00:21:55.380 "data_offset": 256, 00:21:55.380 "data_size": 7936 00:21:55.380 }, 00:21:55.380 { 00:21:55.380 "name": "BaseBdev2", 00:21:55.380 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:55.380 "is_configured": true, 00:21:55.380 "data_offset": 256, 00:21:55.380 "data_size": 7936 00:21:55.380 } 00:21:55.380 ] 00:21:55.380 }' 00:21:55.380 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.380 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.380 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.638 08:53:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.638 [2024-11-27 08:53:52.152433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:55.638 [2024-11-27 08:53:52.185519] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:55.638 [2024-11-27 08:53:52.185599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.638 [2024-11-27 08:53:52.185628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:55.638 [2024-11-27 08:53:52.185640] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.638 08:53:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.638 "name": "raid_bdev1", 00:21:55.638 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:55.638 "strip_size_kb": 0, 00:21:55.638 "state": "online", 00:21:55.638 "raid_level": "raid1", 00:21:55.638 "superblock": true, 00:21:55.638 "num_base_bdevs": 2, 00:21:55.638 "num_base_bdevs_discovered": 1, 00:21:55.638 "num_base_bdevs_operational": 1, 00:21:55.638 "base_bdevs_list": [ 00:21:55.638 { 00:21:55.638 "name": null, 00:21:55.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.638 "is_configured": false, 00:21:55.638 "data_offset": 0, 00:21:55.638 "data_size": 7936 00:21:55.638 }, 00:21:55.638 { 00:21:55.638 "name": "BaseBdev2", 00:21:55.638 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:55.638 "is_configured": true, 00:21:55.638 "data_offset": 256, 00:21:55.638 "data_size": 7936 00:21:55.638 } 00:21:55.638 ] 00:21:55.638 }' 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.638 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:56.204 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:21:56.204 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.204 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:56.204 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.205 "name": "raid_bdev1", 00:21:56.205 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:56.205 "strip_size_kb": 0, 00:21:56.205 "state": "online", 00:21:56.205 "raid_level": "raid1", 00:21:56.205 "superblock": true, 00:21:56.205 "num_base_bdevs": 2, 00:21:56.205 "num_base_bdevs_discovered": 1, 00:21:56.205 "num_base_bdevs_operational": 1, 00:21:56.205 "base_bdevs_list": [ 00:21:56.205 { 00:21:56.205 "name": null, 00:21:56.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.205 "is_configured": false, 00:21:56.205 "data_offset": 0, 00:21:56.205 "data_size": 7936 00:21:56.205 }, 00:21:56.205 { 00:21:56.205 "name": "BaseBdev2", 00:21:56.205 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:56.205 "is_configured": true, 
00:21:56.205 "data_offset": 256, 00:21:56.205 "data_size": 7936 00:21:56.205 } 00:21:56.205 ] 00:21:56.205 }' 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:56.205 [2024-11-27 08:53:52.833391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:56.205 [2024-11-27 08:53:52.833604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.205 [2024-11-27 08:53:52.833655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:56.205 [2024-11-27 08:53:52.833673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.205 [2024-11-27 08:53:52.834004] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.205 [2024-11-27 08:53:52.834027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:56.205 [2024-11-27 08:53:52.834107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:56.205 [2024-11-27 08:53:52.834133] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:56.205 [2024-11-27 08:53:52.834148] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:56.205 [2024-11-27 08:53:52.834163] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:56.205 BaseBdev1 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.205 08:53:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.143 "name": "raid_bdev1", 00:21:57.143 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:57.143 "strip_size_kb": 0, 00:21:57.143 "state": "online", 00:21:57.143 "raid_level": "raid1", 00:21:57.143 "superblock": true, 00:21:57.143 "num_base_bdevs": 2, 00:21:57.143 "num_base_bdevs_discovered": 1, 00:21:57.143 "num_base_bdevs_operational": 1, 00:21:57.143 "base_bdevs_list": [ 00:21:57.143 { 00:21:57.143 "name": null, 00:21:57.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.143 "is_configured": false, 00:21:57.143 "data_offset": 0, 00:21:57.143 "data_size": 7936 00:21:57.143 }, 00:21:57.143 { 00:21:57.143 "name": "BaseBdev2", 00:21:57.143 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:57.143 "is_configured": true, 00:21:57.143 "data_offset": 256, 00:21:57.143 "data_size": 7936 00:21:57.143 } 00:21:57.143 ] 00:21:57.143 }' 00:21:57.143 08:53:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.143 08:53:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.711 "name": "raid_bdev1", 00:21:57.711 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:57.711 "strip_size_kb": 0, 00:21:57.711 "state": "online", 00:21:57.711 "raid_level": "raid1", 00:21:57.711 "superblock": true, 00:21:57.711 "num_base_bdevs": 2, 00:21:57.711 "num_base_bdevs_discovered": 1, 00:21:57.711 "num_base_bdevs_operational": 1, 00:21:57.711 "base_bdevs_list": [ 00:21:57.711 { 00:21:57.711 "name": null, 00:21:57.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.711 "is_configured": false, 00:21:57.711 "data_offset": 0, 00:21:57.711 
"data_size": 7936 00:21:57.711 }, 00:21:57.711 { 00:21:57.711 "name": "BaseBdev2", 00:21:57.711 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:57.711 "is_configured": true, 00:21:57.711 "data_offset": 256, 00:21:57.711 "data_size": 7936 00:21:57.711 } 00:21:57.711 ] 00:21:57.711 }' 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:57.711 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:57.969 [2024-11-27 08:53:54.481926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.969 [2024-11-27 08:53:54.482188] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:57.969 [2024-11-27 08:53:54.482213] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:57.969 request: 00:21:57.969 { 00:21:57.969 "base_bdev": "BaseBdev1", 00:21:57.969 "raid_bdev": "raid_bdev1", 00:21:57.969 "method": "bdev_raid_add_base_bdev", 00:21:57.969 "req_id": 1 00:21:57.969 } 00:21:57.969 Got JSON-RPC error response 00:21:57.969 response: 00:21:57.969 { 00:21:57.969 "code": -22, 00:21:57.969 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:57.969 } 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.969 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.970 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.970 08:53:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.907 "name": "raid_bdev1", 00:21:58.907 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:58.907 "strip_size_kb": 0, 00:21:58.907 "state": "online", 00:21:58.907 "raid_level": "raid1", 00:21:58.907 "superblock": true, 00:21:58.907 "num_base_bdevs": 2, 00:21:58.907 "num_base_bdevs_discovered": 1, 00:21:58.907 "num_base_bdevs_operational": 1, 00:21:58.907 "base_bdevs_list": [ 
00:21:58.907 { 00:21:58.907 "name": null, 00:21:58.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.907 "is_configured": false, 00:21:58.907 "data_offset": 0, 00:21:58.907 "data_size": 7936 00:21:58.907 }, 00:21:58.907 { 00:21:58.907 "name": "BaseBdev2", 00:21:58.907 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:58.907 "is_configured": true, 00:21:58.907 "data_offset": 256, 00:21:58.907 "data_size": 7936 00:21:58.907 } 00:21:58.907 ] 00:21:58.907 }' 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.907 08:53:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.476 "name": "raid_bdev1", 00:21:59.476 "uuid": "57e9672e-64e3-4e86-82a5-4bdac8163785", 00:21:59.476 "strip_size_kb": 0, 00:21:59.476 "state": "online", 00:21:59.476 "raid_level": "raid1", 00:21:59.476 "superblock": true, 00:21:59.476 "num_base_bdevs": 2, 00:21:59.476 "num_base_bdevs_discovered": 1, 00:21:59.476 "num_base_bdevs_operational": 1, 00:21:59.476 "base_bdevs_list": [ 00:21:59.476 { 00:21:59.476 "name": null, 00:21:59.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.476 "is_configured": false, 00:21:59.476 "data_offset": 0, 00:21:59.476 "data_size": 7936 00:21:59.476 }, 00:21:59.476 { 00:21:59.476 "name": "BaseBdev2", 00:21:59.476 "uuid": "fc7c9229-a1fb-587d-a30d-3dcae4e58241", 00:21:59.476 "is_configured": true, 00:21:59.476 "data_offset": 256, 00:21:59.476 "data_size": 7936 00:21:59.476 } 00:21:59.476 ] 00:21:59.476 }' 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88349 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' -z 88349 ']' 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # kill -0 88349 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # uname 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:59.476 
08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 88349 00:21:59.476 killing process with pid 88349 00:21:59.476 Received shutdown signal, test time was about 60.000000 seconds 00:21:59.476 00:21:59.476 Latency(us) 00:21:59.476 [2024-11-27T08:53:56.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.476 [2024-11-27T08:53:56.236Z] =================================================================================================================== 00:21:59.476 [2024-11-27T08:53:56.236Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # echo 'killing process with pid 88349' 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # kill 88349 00:21:59.476 [2024-11-27 08:53:56.220050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:59.476 08:53:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@975 -- # wait 88349 00:21:59.476 [2024-11-27 08:53:56.220235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.476 [2024-11-27 08:53:56.220308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.476 [2024-11-27 08:53:56.220328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:00.043 [2024-11-27 08:53:56.522062] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:00.980 08:53:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:22:00.980 00:22:00.980 real 0m21.709s 00:22:00.980 user 0m29.324s 00:22:00.980 sys 0m2.616s 00:22:00.980 08:53:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:00.980 ************************************ 00:22:00.980 END TEST raid_rebuild_test_sb_md_separate 00:22:00.980 ************************************ 00:22:00.980 08:53:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:00.980 08:53:57 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:22:00.980 08:53:57 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:22:00.980 08:53:57 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:22:00.980 08:53:57 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:00.980 08:53:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:00.980 ************************************ 00:22:00.980 START TEST raid_state_function_test_sb_md_interleaved 00:22:00.980 ************************************ 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # raid_state_function_test raid1 2 true 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:00.981 08:53:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:00.981 Process raid pid: 89057 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89057 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89057' 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89057 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # '[' -z 89057 ']' 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:00.981 08:53:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.239 [2024-11-27 08:53:57.797346] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:22:01.239 [2024-11-27 08:53:57.797542] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.239 [2024-11-27 08:53:57.985057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.522 [2024-11-27 08:53:58.131271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.784 [2024-11-27 08:53:58.357482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.784 [2024-11-27 08:53:58.357553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@865 -- # return 0 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.043 [2024-11-27 08:53:58.749721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:02.043 [2024-11-27 08:53:58.749822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:02.043 [2024-11-27 08:53:58.749841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:02.043 [2024-11-27 08:53:58.749859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:02.043 08:53:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.043 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.043 08:53:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.301 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.301 "name": "Existed_Raid", 00:22:02.301 "uuid": "d1a3f3e8-e003-4556-827e-69c24e403586", 00:22:02.301 "strip_size_kb": 0, 00:22:02.301 "state": "configuring", 00:22:02.301 "raid_level": "raid1", 00:22:02.301 "superblock": true, 00:22:02.301 "num_base_bdevs": 2, 00:22:02.301 "num_base_bdevs_discovered": 0, 00:22:02.301 "num_base_bdevs_operational": 2, 00:22:02.301 "base_bdevs_list": [ 00:22:02.301 { 00:22:02.301 "name": "BaseBdev1", 00:22:02.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.301 "is_configured": false, 00:22:02.301 "data_offset": 0, 00:22:02.301 "data_size": 0 00:22:02.301 }, 00:22:02.301 { 00:22:02.301 "name": "BaseBdev2", 00:22:02.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.301 "is_configured": false, 00:22:02.301 "data_offset": 0, 00:22:02.301 "data_size": 0 00:22:02.301 } 00:22:02.301 ] 00:22:02.301 }' 00:22:02.301 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.301 08:53:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.559 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.560 [2024-11-27 08:53:59.265781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:02.560 [2024-11-27 08:53:59.265972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.560 [2024-11-27 08:53:59.273735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:02.560 [2024-11-27 08:53:59.273788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:02.560 [2024-11-27 08:53:59.273805] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:02.560 [2024-11-27 08:53:59.273824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:22:02.560 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.561 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.823 [2024-11-27 08:53:59.321748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.823 BaseBdev1 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev1 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local i 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.823 [ 00:22:02.823 { 00:22:02.823 "name": "BaseBdev1", 00:22:02.823 "aliases": [ 00:22:02.823 "4b91cad9-f741-4230-8f1c-d4f081f89471" 00:22:02.823 ], 00:22:02.823 "product_name": "Malloc disk", 00:22:02.823 "block_size": 4128, 00:22:02.823 "num_blocks": 8192, 00:22:02.823 "uuid": "4b91cad9-f741-4230-8f1c-d4f081f89471", 00:22:02.823 "md_size": 32, 00:22:02.823 
"md_interleave": true, 00:22:02.823 "dif_type": 0, 00:22:02.823 "assigned_rate_limits": { 00:22:02.823 "rw_ios_per_sec": 0, 00:22:02.823 "rw_mbytes_per_sec": 0, 00:22:02.823 "r_mbytes_per_sec": 0, 00:22:02.823 "w_mbytes_per_sec": 0 00:22:02.823 }, 00:22:02.823 "claimed": true, 00:22:02.823 "claim_type": "exclusive_write", 00:22:02.823 "zoned": false, 00:22:02.823 "supported_io_types": { 00:22:02.823 "read": true, 00:22:02.823 "write": true, 00:22:02.823 "unmap": true, 00:22:02.823 "flush": true, 00:22:02.823 "reset": true, 00:22:02.823 "nvme_admin": false, 00:22:02.823 "nvme_io": false, 00:22:02.823 "nvme_io_md": false, 00:22:02.823 "write_zeroes": true, 00:22:02.823 "zcopy": true, 00:22:02.823 "get_zone_info": false, 00:22:02.823 "zone_management": false, 00:22:02.823 "zone_append": false, 00:22:02.823 "compare": false, 00:22:02.823 "compare_and_write": false, 00:22:02.823 "abort": true, 00:22:02.823 "seek_hole": false, 00:22:02.823 "seek_data": false, 00:22:02.823 "copy": true, 00:22:02.823 "nvme_iov_md": false 00:22:02.823 }, 00:22:02.823 "memory_domains": [ 00:22:02.823 { 00:22:02.823 "dma_device_id": "system", 00:22:02.823 "dma_device_type": 1 00:22:02.823 }, 00:22:02.823 { 00:22:02.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.823 "dma_device_type": 2 00:22:02.823 } 00:22:02.823 ], 00:22:02.823 "driver_specific": {} 00:22:02.823 } 00:22:02.823 ] 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # return 0 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.823 08:53:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.823 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.823 "name": "Existed_Raid", 00:22:02.823 "uuid": "df0f0c1e-7a7c-4a02-b53e-60c36bb99757", 00:22:02.823 "strip_size_kb": 0, 00:22:02.823 "state": "configuring", 00:22:02.823 "raid_level": "raid1", 
00:22:02.823 "superblock": true, 00:22:02.823 "num_base_bdevs": 2, 00:22:02.823 "num_base_bdevs_discovered": 1, 00:22:02.823 "num_base_bdevs_operational": 2, 00:22:02.823 "base_bdevs_list": [ 00:22:02.823 { 00:22:02.823 "name": "BaseBdev1", 00:22:02.823 "uuid": "4b91cad9-f741-4230-8f1c-d4f081f89471", 00:22:02.823 "is_configured": true, 00:22:02.823 "data_offset": 256, 00:22:02.823 "data_size": 7936 00:22:02.823 }, 00:22:02.823 { 00:22:02.823 "name": "BaseBdev2", 00:22:02.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.823 "is_configured": false, 00:22:02.823 "data_offset": 0, 00:22:02.823 "data_size": 0 00:22:02.823 } 00:22:02.823 ] 00:22:02.824 }' 00:22:02.824 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.824 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.389 [2024-11-27 08:53:59.874015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:03.389 [2024-11-27 08:53:59.874085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.389 [2024-11-27 08:53:59.882053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.389 [2024-11-27 08:53:59.884619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:03.389 [2024-11-27 08:53:59.884819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.389 
08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.389 "name": "Existed_Raid", 00:22:03.389 "uuid": "0976abb3-1234-458a-9fec-788f97a318e5", 00:22:03.389 "strip_size_kb": 0, 00:22:03.389 "state": "configuring", 00:22:03.389 "raid_level": "raid1", 00:22:03.389 "superblock": true, 00:22:03.389 "num_base_bdevs": 2, 00:22:03.389 "num_base_bdevs_discovered": 1, 00:22:03.389 "num_base_bdevs_operational": 2, 00:22:03.389 "base_bdevs_list": [ 00:22:03.389 { 00:22:03.389 "name": "BaseBdev1", 00:22:03.389 "uuid": "4b91cad9-f741-4230-8f1c-d4f081f89471", 00:22:03.389 "is_configured": true, 00:22:03.389 "data_offset": 256, 00:22:03.389 "data_size": 7936 00:22:03.389 }, 00:22:03.389 { 00:22:03.389 "name": "BaseBdev2", 00:22:03.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.389 "is_configured": false, 00:22:03.389 "data_offset": 0, 00:22:03.389 "data_size": 0 00:22:03.389 } 00:22:03.389 ] 00:22:03.389 }' 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:22:03.389 08:53:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.647 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:22:03.647 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.647 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.905 [2024-11-27 08:54:00.412574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.905 [2024-11-27 08:54:00.412887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:03.905 [2024-11-27 08:54:00.412907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:03.905 [2024-11-27 08:54:00.413017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:03.905 [2024-11-27 08:54:00.413124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:03.905 [2024-11-27 08:54:00.413145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:03.905 [2024-11-27 08:54:00.413235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.905 BaseBdev2 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_name=BaseBdev2 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_timeout= 
00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local i 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # rpc_cmd bdev_wait_for_examine 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.905 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.905 [ 00:22:03.905 { 00:22:03.905 "name": "BaseBdev2", 00:22:03.905 "aliases": [ 00:22:03.905 "35dddc74-1df3-4fa9-a572-6e383e21ffa2" 00:22:03.905 ], 00:22:03.905 "product_name": "Malloc disk", 00:22:03.905 "block_size": 4128, 00:22:03.905 "num_blocks": 8192, 00:22:03.905 "uuid": "35dddc74-1df3-4fa9-a572-6e383e21ffa2", 00:22:03.905 "md_size": 32, 00:22:03.905 "md_interleave": true, 00:22:03.905 "dif_type": 0, 00:22:03.905 "assigned_rate_limits": { 00:22:03.905 "rw_ios_per_sec": 0, 00:22:03.905 "rw_mbytes_per_sec": 0, 00:22:03.905 "r_mbytes_per_sec": 0, 00:22:03.905 "w_mbytes_per_sec": 0 00:22:03.905 }, 00:22:03.905 "claimed": true, 00:22:03.905 "claim_type": "exclusive_write", 
00:22:03.905 "zoned": false, 00:22:03.905 "supported_io_types": { 00:22:03.905 "read": true, 00:22:03.905 "write": true, 00:22:03.905 "unmap": true, 00:22:03.905 "flush": true, 00:22:03.905 "reset": true, 00:22:03.905 "nvme_admin": false, 00:22:03.905 "nvme_io": false, 00:22:03.905 "nvme_io_md": false, 00:22:03.905 "write_zeroes": true, 00:22:03.905 "zcopy": true, 00:22:03.905 "get_zone_info": false, 00:22:03.905 "zone_management": false, 00:22:03.905 "zone_append": false, 00:22:03.905 "compare": false, 00:22:03.905 "compare_and_write": false, 00:22:03.905 "abort": true, 00:22:03.905 "seek_hole": false, 00:22:03.905 "seek_data": false, 00:22:03.905 "copy": true, 00:22:03.905 "nvme_iov_md": false 00:22:03.905 }, 00:22:03.905 "memory_domains": [ 00:22:03.905 { 00:22:03.905 "dma_device_id": "system", 00:22:03.905 "dma_device_type": 1 00:22:03.905 }, 00:22:03.906 { 00:22:03.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.906 "dma_device_type": 2 00:22:03.906 } 00:22:03.906 ], 00:22:03.906 "driver_specific": {} 00:22:03.906 } 00:22:03.906 ] 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # return 0 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.906 
08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.906 "name": "Existed_Raid", 00:22:03.906 "uuid": "0976abb3-1234-458a-9fec-788f97a318e5", 00:22:03.906 "strip_size_kb": 0, 00:22:03.906 "state": "online", 00:22:03.906 "raid_level": "raid1", 00:22:03.906 "superblock": true, 00:22:03.906 "num_base_bdevs": 2, 00:22:03.906 "num_base_bdevs_discovered": 2, 00:22:03.906 
"num_base_bdevs_operational": 2, 00:22:03.906 "base_bdevs_list": [ 00:22:03.906 { 00:22:03.906 "name": "BaseBdev1", 00:22:03.906 "uuid": "4b91cad9-f741-4230-8f1c-d4f081f89471", 00:22:03.906 "is_configured": true, 00:22:03.906 "data_offset": 256, 00:22:03.906 "data_size": 7936 00:22:03.906 }, 00:22:03.906 { 00:22:03.906 "name": "BaseBdev2", 00:22:03.906 "uuid": "35dddc74-1df3-4fa9-a572-6e383e21ffa2", 00:22:03.906 "is_configured": true, 00:22:03.906 "data_offset": 256, 00:22:03.906 "data_size": 7936 00:22:03.906 } 00:22:03.906 ] 00:22:03.906 }' 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.906 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.472 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:04.472 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:04.472 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:04.472 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:04.472 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:04.473 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:04.473 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:04.473 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.473 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.473 08:54:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:04.473 [2024-11-27 08:54:00.949215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.473 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.473 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:04.473 "name": "Existed_Raid", 00:22:04.473 "aliases": [ 00:22:04.473 "0976abb3-1234-458a-9fec-788f97a318e5" 00:22:04.473 ], 00:22:04.473 "product_name": "Raid Volume", 00:22:04.473 "block_size": 4128, 00:22:04.473 "num_blocks": 7936, 00:22:04.473 "uuid": "0976abb3-1234-458a-9fec-788f97a318e5", 00:22:04.473 "md_size": 32, 00:22:04.473 "md_interleave": true, 00:22:04.473 "dif_type": 0, 00:22:04.473 "assigned_rate_limits": { 00:22:04.473 "rw_ios_per_sec": 0, 00:22:04.473 "rw_mbytes_per_sec": 0, 00:22:04.473 "r_mbytes_per_sec": 0, 00:22:04.473 "w_mbytes_per_sec": 0 00:22:04.473 }, 00:22:04.473 "claimed": false, 00:22:04.473 "zoned": false, 00:22:04.473 "supported_io_types": { 00:22:04.473 "read": true, 00:22:04.473 "write": true, 00:22:04.473 "unmap": false, 00:22:04.473 "flush": false, 00:22:04.473 "reset": true, 00:22:04.473 "nvme_admin": false, 00:22:04.473 "nvme_io": false, 00:22:04.473 "nvme_io_md": false, 00:22:04.473 "write_zeroes": true, 00:22:04.473 "zcopy": false, 00:22:04.473 "get_zone_info": false, 00:22:04.473 "zone_management": false, 00:22:04.473 "zone_append": false, 00:22:04.473 "compare": false, 00:22:04.473 "compare_and_write": false, 00:22:04.473 "abort": false, 00:22:04.473 "seek_hole": false, 00:22:04.473 "seek_data": false, 00:22:04.473 "copy": false, 00:22:04.473 "nvme_iov_md": false 00:22:04.473 }, 00:22:04.473 "memory_domains": [ 00:22:04.473 { 00:22:04.473 "dma_device_id": "system", 00:22:04.473 "dma_device_type": 1 00:22:04.473 }, 00:22:04.473 { 00:22:04.473 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:04.473 "dma_device_type": 2 00:22:04.473 }, 00:22:04.473 { 00:22:04.473 "dma_device_id": "system", 00:22:04.473 "dma_device_type": 1 00:22:04.473 }, 00:22:04.473 { 00:22:04.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.473 "dma_device_type": 2 00:22:04.473 } 00:22:04.473 ], 00:22:04.473 "driver_specific": { 00:22:04.473 "raid": { 00:22:04.473 "uuid": "0976abb3-1234-458a-9fec-788f97a318e5", 00:22:04.473 "strip_size_kb": 0, 00:22:04.473 "state": "online", 00:22:04.473 "raid_level": "raid1", 00:22:04.473 "superblock": true, 00:22:04.473 "num_base_bdevs": 2, 00:22:04.473 "num_base_bdevs_discovered": 2, 00:22:04.473 "num_base_bdevs_operational": 2, 00:22:04.473 "base_bdevs_list": [ 00:22:04.473 { 00:22:04.473 "name": "BaseBdev1", 00:22:04.473 "uuid": "4b91cad9-f741-4230-8f1c-d4f081f89471", 00:22:04.473 "is_configured": true, 00:22:04.473 "data_offset": 256, 00:22:04.473 "data_size": 7936 00:22:04.473 }, 00:22:04.473 { 00:22:04.473 "name": "BaseBdev2", 00:22:04.473 "uuid": "35dddc74-1df3-4fa9-a572-6e383e21ffa2", 00:22:04.473 "is_configured": true, 00:22:04.473 "data_offset": 256, 00:22:04.473 "data_size": 7936 00:22:04.473 } 00:22:04.473 ] 00:22:04.473 } 00:22:04.473 } 00:22:04.473 }' 00:22:04.473 08:54:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:04.473 BaseBdev2' 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.473 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:04.474 
08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:04.474 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:04.474 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.474 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.474 [2024-11-27 08:54:01.208918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.731 08:54:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.731 "name": "Existed_Raid", 00:22:04.731 "uuid": "0976abb3-1234-458a-9fec-788f97a318e5", 00:22:04.731 "strip_size_kb": 0, 00:22:04.731 "state": "online", 00:22:04.731 "raid_level": "raid1", 00:22:04.731 "superblock": true, 00:22:04.731 "num_base_bdevs": 2, 00:22:04.731 "num_base_bdevs_discovered": 1, 00:22:04.731 "num_base_bdevs_operational": 1, 00:22:04.731 "base_bdevs_list": [ 00:22:04.731 { 00:22:04.731 "name": null, 00:22:04.731 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:04.731 "is_configured": false, 00:22:04.731 "data_offset": 0, 00:22:04.731 "data_size": 7936 00:22:04.731 }, 00:22:04.731 { 00:22:04.731 "name": "BaseBdev2", 00:22:04.731 "uuid": "35dddc74-1df3-4fa9-a572-6e383e21ffa2", 00:22:04.731 "is_configured": true, 00:22:04.731 "data_offset": 256, 00:22:04.731 "data_size": 7936 00:22:04.731 } 00:22:04.731 ] 00:22:04.731 }' 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.731 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:05.296 08:54:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 [2024-11-27 08:54:01.829914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:05.296 [2024-11-27 08:54:01.830085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:05.296 [2024-11-27 08:54:01.920827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.296 [2024-11-27 08:54:01.920902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.296 [2024-11-27 08:54:01.920923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89057 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' -z 89057 ']' 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # kill -0 89057 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # uname 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:05.296 08:54:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 89057 00:22:05.296 killing process with pid 89057 00:22:05.296 08:54:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:22:05.296 08:54:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:22:05.296 08:54:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # echo 'killing process with pid 89057' 00:22:05.296 08:54:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # kill 89057 00:22:05.296 [2024-11-27 08:54:02.008197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.296 08:54:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@975 -- # wait 89057 00:22:05.296 [2024-11-27 08:54:02.023791] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:06.671 
************************************ 00:22:06.671 END TEST raid_state_function_test_sb_md_interleaved 00:22:06.671 ************************************ 00:22:06.671 08:54:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:22:06.671 00:22:06.671 real 0m5.455s 00:22:06.671 user 0m8.132s 00:22:06.671 sys 0m0.807s 00:22:06.671 08:54:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:06.671 08:54:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.671 08:54:03 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:22:06.671 08:54:03 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:22:06.671 08:54:03 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:06.671 08:54:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.671 ************************************ 00:22:06.671 START TEST raid_superblock_test_md_interleaved 00:22:06.671 ************************************ 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # raid_superblock_test raid1 2 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89304 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89304 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # '[' -z 89304 ']' 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:06.671 08:54:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.671 [2024-11-27 08:54:03.281980] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:22:06.671 [2024-11-27 08:54:03.282152] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89304 ] 00:22:06.930 [2024-11-27 08:54:03.460049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.930 [2024-11-27 08:54:03.605497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.189 [2024-11-27 08:54:03.829894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.189 [2024-11-27 08:54:03.829983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@865 -- # return 0 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.755 malloc1 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.755 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.756 [2024-11-27 08:54:04.355883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:07.756 [2024-11-27 08:54:04.356122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.756 [2024-11-27 08:54:04.356288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:22:07.756 [2024-11-27 08:54:04.356431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.756 [2024-11-27 08:54:04.359273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.756 [2024-11-27 08:54:04.359458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:07.756 pt1 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.756 malloc2 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.756 [2024-11-27 08:54:04.415534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:07.756 [2024-11-27 08:54:04.415615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.756 [2024-11-27 08:54:04.415649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:07.756 [2024-11-27 08:54:04.415663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.756 [2024-11-27 08:54:04.418211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.756 [2024-11-27 08:54:04.418428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:07.756 pt2 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.756 [2024-11-27 
08:54:04.423553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:07.756 [2024-11-27 08:54:04.426281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:07.756 [2024-11-27 08:54:04.426728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:22:07.756 [2024-11-27 08:54:04.426896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:22:07.756 [2024-11-27 08:54:04.427054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:22:07.756 [2024-11-27 08:54:04.427282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:22:07.756 [2024-11-27 08:54:04.427421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:22:07.756 [2024-11-27 08:54:04.427720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:07.756 "name": "raid_bdev1",
00:22:07.756 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745",
00:22:07.756 "strip_size_kb": 0,
00:22:07.756 "state": "online",
00:22:07.756 "raid_level": "raid1",
00:22:07.756 "superblock": true,
00:22:07.756 "num_base_bdevs": 2,
00:22:07.756 "num_base_bdevs_discovered": 2,
00:22:07.756 "num_base_bdevs_operational": 2,
00:22:07.756 "base_bdevs_list": [
00:22:07.756 {
00:22:07.756 "name": "pt1",
00:22:07.756 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:07.756 "is_configured": true,
00:22:07.756 "data_offset": 256,
00:22:07.756 "data_size": 7936
00:22:07.756 },
00:22:07.756 {
00:22:07.756 "name": "pt2",
00:22:07.756 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:07.756 "is_configured": true,
00:22:07.756 "data_offset": 256,
00:22:07.756 "data_size": 7936
00:22:07.756 }
00:22:07.756 ]
00:22:07.756 }'
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:07.756 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:22:08.322 [2024-11-27 08:54:04.960296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:08.322 08:54:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.322 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:22:08.322 "name": "raid_bdev1",
00:22:08.322 "aliases": [
00:22:08.322 "264a7ee8-4f32-4e07-b98c-d6a4dc212745"
00:22:08.322 ],
00:22:08.322 "product_name": "Raid Volume",
00:22:08.322 "block_size": 4128,
00:22:08.322 "num_blocks": 7936,
00:22:08.322 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745",
00:22:08.322 "md_size": 32,
00:22:08.322 "md_interleave": true,
00:22:08.322 "dif_type": 0,
00:22:08.322 "assigned_rate_limits": {
00:22:08.322 "rw_ios_per_sec": 0,
00:22:08.322 "rw_mbytes_per_sec": 0,
00:22:08.322 "r_mbytes_per_sec": 0,
00:22:08.322 "w_mbytes_per_sec": 0
00:22:08.322 },
00:22:08.322 "claimed": false,
00:22:08.322 "zoned": false,
00:22:08.322 "supported_io_types": {
00:22:08.322 "read": true,
00:22:08.322 "write": true,
00:22:08.322 "unmap": false,
00:22:08.322 "flush": false,
00:22:08.322 "reset": true,
00:22:08.322 "nvme_admin": false,
00:22:08.322 "nvme_io": false,
00:22:08.322 "nvme_io_md": false,
00:22:08.322 "write_zeroes": true,
00:22:08.322 "zcopy": false,
00:22:08.322 "get_zone_info": false,
00:22:08.322 "zone_management": false,
00:22:08.322 "zone_append": false,
00:22:08.322 "compare": false,
00:22:08.322 "compare_and_write": false,
00:22:08.322 "abort": false,
00:22:08.322 "seek_hole": false,
00:22:08.322 "seek_data": false,
00:22:08.322 "copy": false,
00:22:08.322 "nvme_iov_md": false
00:22:08.322 },
00:22:08.322 "memory_domains": [
00:22:08.322 {
00:22:08.322 "dma_device_id": "system",
00:22:08.322 "dma_device_type": 1
00:22:08.322 },
00:22:08.322 {
00:22:08.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:08.322 "dma_device_type": 2
00:22:08.322 },
00:22:08.322 {
00:22:08.322 "dma_device_id": "system",
00:22:08.322 "dma_device_type": 1
00:22:08.322 },
00:22:08.322 {
00:22:08.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:08.322 "dma_device_type": 2
00:22:08.322 }
00:22:08.322 ],
00:22:08.322 "driver_specific": {
00:22:08.322 "raid": {
00:22:08.322 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745",
00:22:08.322 "strip_size_kb": 0,
00:22:08.322 "state": "online",
00:22:08.322 "raid_level": "raid1",
00:22:08.322 "superblock": true,
00:22:08.322 "num_base_bdevs": 2,
00:22:08.322 "num_base_bdevs_discovered": 2,
00:22:08.322 "num_base_bdevs_operational": 2,
00:22:08.322 "base_bdevs_list": [
00:22:08.322 {
00:22:08.322 "name": "pt1",
00:22:08.322 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:08.322 "is_configured": true,
00:22:08.322 "data_offset": 256,
00:22:08.322 "data_size": 7936
00:22:08.322 },
00:22:08.322 {
00:22:08.322 "name": "pt2",
00:22:08.322 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:08.322 "is_configured": true,
00:22:08.322 "data_offset": 256,
00:22:08.322 "data_size": 7936
00:22:08.322 }
00:22:08.322 ]
00:22:08.322 }
00:22:08.322 }
00:22:08.322 }'
00:22:08.322 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:22:08.322 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:22:08.322 pt2'
00:22:08.322 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:22:08.581 [2024-11-27 08:54:05.236272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=264a7ee8-4f32-4e07-b98c-d6a4dc212745
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 264a7ee8-4f32-4e07-b98c-d6a4dc212745 ']'
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.581 [2024-11-27 08:54:05.291955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:08.581 [2024-11-27 08:54:05.292000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:08.581 [2024-11-27 08:54:05.292122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:08.581 [2024-11-27 08:54:05.292207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:08.581 [2024-11-27 08:54:05.292227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.581 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.841 [2024-11-27 08:54:05.432034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:22:08.841 [2024-11-27 08:54:05.434863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:22:08.841 [2024-11-27 08:54:05.435133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:22:08.841 [2024-11-27 08:54:05.435227] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:22:08.841 [2024-11-27 08:54:05.435256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:08.841 [2024-11-27 08:54:05.435272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:22:08.841 request:
00:22:08.841 {
00:22:08.841 "name": "raid_bdev1",
00:22:08.841 "raid_level": "raid1",
00:22:08.841 "base_bdevs": [
00:22:08.841 "malloc1",
00:22:08.841 "malloc2"
00:22:08.841 ],
00:22:08.841 "superblock": false,
00:22:08.841 "method": "bdev_raid_create",
00:22:08.841 "req_id": 1
00:22:08.841 }
00:22:08.841 Got JSON-RPC error response
00:22:08.841 response:
00:22:08.841 {
00:22:08.841 "code": -17,
00:22:08.841 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:22:08.841 }
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.841 [2024-11-27 08:54:05.488094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:08.841 [2024-11-27 08:54:05.488290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:08.841 [2024-11-27 08:54:05.488445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:22:08.841 [2024-11-27 08:54:05.488558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:08.841 [2024-11-27 08:54:05.491371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:08.841 [2024-11-27 08:54:05.491535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:08.841 [2024-11-27 08:54:05.491705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:22:08.841 [2024-11-27 08:54:05.491830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:08.841 pt1
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.841 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:08.841 "name": "raid_bdev1",
00:22:08.841 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745",
00:22:08.841 "strip_size_kb": 0,
00:22:08.841 "state": "configuring",
00:22:08.841 "raid_level": "raid1",
00:22:08.841 "superblock": true,
00:22:08.841 "num_base_bdevs": 2,
00:22:08.841 "num_base_bdevs_discovered": 1,
00:22:08.841 "num_base_bdevs_operational": 2,
00:22:08.841 "base_bdevs_list": [
00:22:08.841 {
00:22:08.841 "name": "pt1",
00:22:08.841 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:08.841 "is_configured": true,
00:22:08.841 "data_offset": 256,
00:22:08.842 "data_size": 7936
00:22:08.842 },
00:22:08.842 {
00:22:08.842 "name": null,
00:22:08.842 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:08.842 "is_configured": false,
00:22:08.842 "data_offset": 256,
00:22:08.842 "data_size": 7936
00:22:08.842 }
00:22:08.842 ]
00:22:08.842 }'
00:22:08.842 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:08.842 08:54:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:09.409 [2024-11-27 08:54:06.016281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:09.409 [2024-11-27 08:54:06.016408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:09.409 [2024-11-27 08:54:06.016445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:22:09.409 [2024-11-27 08:54:06.016464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:09.409 [2024-11-27 08:54:06.016714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:09.409 [2024-11-27 08:54:06.016741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:09.409 [2024-11-27 08:54:06.016817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:22:09.409 [2024-11-27 08:54:06.016860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:09.409 [2024-11-27 08:54:06.017003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:22:09.409 [2024-11-27 08:54:06.017030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:22:09.409 [2024-11-27 08:54:06.017122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:22:09.409 [2024-11-27 08:54:06.017224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:22:09.409 [2024-11-27 08:54:06.017239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:22:09.409 [2024-11-27 08:54:06.017351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:09.409 pt2
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:09.409 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:09.410 "name": "raid_bdev1",
00:22:09.410 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745",
00:22:09.410 "strip_size_kb": 0,
00:22:09.410 "state": "online",
00:22:09.410 "raid_level": "raid1",
00:22:09.410 "superblock": true,
00:22:09.410 "num_base_bdevs": 2,
00:22:09.410 "num_base_bdevs_discovered": 2,
00:22:09.410 "num_base_bdevs_operational": 2,
00:22:09.410 "base_bdevs_list": [
00:22:09.410 {
00:22:09.410 "name": "pt1",
00:22:09.410 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:09.410 "is_configured": true,
00:22:09.410 "data_offset": 256,
00:22:09.410 "data_size": 7936
00:22:09.410 },
00:22:09.410 {
00:22:09.410 "name": "pt2",
00:22:09.410 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:09.410 "is_configured": true,
00:22:09.410 "data_offset": 256,
00:22:09.410 "data_size": 7936
00:22:09.410 }
00:22:09.410 ]
00:22:09.410 }'
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:09.410 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:09.977 [2024-11-27 08:54:06.552803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:22:09.977 "name": "raid_bdev1",
00:22:09.977 "aliases": [
00:22:09.977 "264a7ee8-4f32-4e07-b98c-d6a4dc212745"
00:22:09.977 ],
00:22:09.977 "product_name": "Raid Volume",
00:22:09.977 "block_size": 4128,
00:22:09.977 "num_blocks": 7936,
00:22:09.977 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745",
00:22:09.977 "md_size": 32,
00:22:09.977 "md_interleave": true,
00:22:09.977 "dif_type": 0,
00:22:09.977 "assigned_rate_limits": {
00:22:09.977 "rw_ios_per_sec": 0,
00:22:09.977 "rw_mbytes_per_sec": 0,
00:22:09.977 "r_mbytes_per_sec": 0,
00:22:09.977 "w_mbytes_per_sec": 0
00:22:09.977 },
00:22:09.977 "claimed": false,
00:22:09.977 "zoned": false,
00:22:09.977 "supported_io_types": {
00:22:09.977 "read": true,
00:22:09.977 "write": true,
00:22:09.977 "unmap": false,
00:22:09.977 "flush": false,
00:22:09.977 "reset": true,
00:22:09.977 "nvme_admin": false,
00:22:09.977 "nvme_io": false,
00:22:09.977 "nvme_io_md": false,
00:22:09.977 "write_zeroes": true,
00:22:09.977 "zcopy": false,
00:22:09.977 "get_zone_info": false,
00:22:09.977 "zone_management": false,
00:22:09.977 "zone_append": false,
00:22:09.977 "compare": false,
00:22:09.977 "compare_and_write": false,
00:22:09.977 "abort": false,
00:22:09.977 "seek_hole": false,
00:22:09.977 "seek_data": false,
00:22:09.977 "copy": false,
00:22:09.977 "nvme_iov_md": false
00:22:09.977 },
00:22:09.977 "memory_domains": [
00:22:09.977 {
00:22:09.977 "dma_device_id": "system",
00:22:09.977 "dma_device_type": 1
00:22:09.977 },
00:22:09.977 {
00:22:09.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:09.977 "dma_device_type": 2
00:22:09.977 },
00:22:09.977 {
00:22:09.977 "dma_device_id": "system",
00:22:09.977 "dma_device_type": 1
00:22:09.977 },
00:22:09.977 {
00:22:09.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:09.977 "dma_device_type": 2
00:22:09.977 }
00:22:09.977 ],
00:22:09.977 "driver_specific": {
00:22:09.977 "raid": {
00:22:09.977 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745",
00:22:09.977 "strip_size_kb": 0,
00:22:09.977 "state": "online",
00:22:09.977 "raid_level": "raid1",
00:22:09.977 "superblock": true,
00:22:09.977 "num_base_bdevs": 2,
00:22:09.977 "num_base_bdevs_discovered": 2,
00:22:09.977 "num_base_bdevs_operational": 2,
00:22:09.977 "base_bdevs_list": [
00:22:09.977 {
00:22:09.977 "name": "pt1",
00:22:09.977 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:09.977 "is_configured": true,
00:22:09.977 "data_offset": 256,
00:22:09.977 "data_size": 7936
00:22:09.977 },
00:22:09.977 {
00:22:09.977 "name": "pt2",
00:22:09.977 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:09.977 "is_configured": true,
00:22:09.977 "data_offset": 256,
00:22:09.977 "data_size": 7936
00:22:09.977 }
00:22:09.977 ]
00:22:09.977 }
00:22:09.977 }
00:22:09.977 }'
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:22:09.977 pt2'
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:09.977 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.240 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:22:10.240 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:22:10.240 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:10.241 [2024-11-27 08:54:06.836801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 264a7ee8-4f32-4e07-b98c-d6a4dc212745 '!=' 264a7ee8-4f32-4e07-b98c-d6a4dc212745 ']'
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:10.241 [2024-11-27 08:54:06.884612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:22:10.241 08:54:06
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.241 08:54:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.241 "name": "raid_bdev1", 00:22:10.241 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745", 00:22:10.241 "strip_size_kb": 0, 00:22:10.241 "state": "online", 00:22:10.241 "raid_level": "raid1", 00:22:10.241 "superblock": true, 00:22:10.241 "num_base_bdevs": 2, 00:22:10.241 "num_base_bdevs_discovered": 1, 00:22:10.241 "num_base_bdevs_operational": 1, 00:22:10.241 "base_bdevs_list": [ 00:22:10.241 { 00:22:10.241 "name": null, 00:22:10.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.241 "is_configured": false, 00:22:10.241 "data_offset": 0, 00:22:10.241 "data_size": 7936 00:22:10.241 }, 00:22:10.241 { 00:22:10.241 "name": "pt2", 00:22:10.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:10.241 "is_configured": true, 00:22:10.241 "data_offset": 256, 00:22:10.241 "data_size": 7936 00:22:10.241 } 00:22:10.241 ] 00:22:10.241 }' 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.241 08:54:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.831 [2024-11-27 08:54:07.400648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:10.831 [2024-11-27 08:54:07.400886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.831 [2024-11-27 08:54:07.401021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.831 [2024-11-27 08:54:07.401098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.831 [2024-11-27 08:54:07.401120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:10.831 08:54:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.831 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.831 [2024-11-27 08:54:07.472645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:10.831 [2024-11-27 08:54:07.472716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.831 [2024-11-27 08:54:07.472743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:10.832 [2024-11-27 08:54:07.472760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.832 [2024-11-27 08:54:07.475597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.832 [2024-11-27 08:54:07.475646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:10.832 [2024-11-27 08:54:07.475726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:10.832 [2024-11-27 08:54:07.475795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:10.832 [2024-11-27 08:54:07.475900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:10.832 [2024-11-27 08:54:07.475922] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:10.832 [2024-11-27 08:54:07.476035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:10.832 [2024-11-27 08:54:07.476128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:10.832 [2024-11-27 08:54:07.476142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:10.832 [2024-11-27 08:54:07.476241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.832 pt2 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.832 "name": "raid_bdev1", 00:22:10.832 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745", 00:22:10.832 "strip_size_kb": 0, 00:22:10.832 "state": "online", 00:22:10.832 "raid_level": "raid1", 00:22:10.832 "superblock": true, 00:22:10.832 "num_base_bdevs": 2, 00:22:10.832 "num_base_bdevs_discovered": 1, 00:22:10.832 "num_base_bdevs_operational": 1, 00:22:10.832 "base_bdevs_list": [ 00:22:10.832 { 00:22:10.832 "name": null, 00:22:10.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.832 "is_configured": false, 00:22:10.832 "data_offset": 256, 00:22:10.832 "data_size": 7936 00:22:10.832 }, 00:22:10.832 { 00:22:10.832 "name": "pt2", 00:22:10.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:10.832 "is_configured": true, 00:22:10.832 "data_offset": 256, 00:22:10.832 "data_size": 7936 00:22:10.832 } 00:22:10.832 ] 00:22:10.832 }' 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.832 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.398 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:11.398 08:54:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.398 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.398 [2024-11-27 08:54:07.972779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:11.398 [2024-11-27 08:54:07.972821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:11.398 [2024-11-27 08:54:07.972936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.398 [2024-11-27 08:54:07.973020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.398 [2024-11-27 08:54:07.973037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:11.398 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.398 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.398 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:11.398 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.398 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.398 08:54:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.398 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:11.398 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.399 [2024-11-27 08:54:08.036827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:11.399 [2024-11-27 08:54:08.036924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.399 [2024-11-27 08:54:08.036958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:11.399 [2024-11-27 08:54:08.036973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.399 [2024-11-27 08:54:08.039814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.399 [2024-11-27 08:54:08.039873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:11.399 [2024-11-27 08:54:08.039955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:11.399 [2024-11-27 08:54:08.040020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:11.399 [2024-11-27 08:54:08.040156] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:11.399 [2024-11-27 08:54:08.040175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:11.399 [2024-11-27 08:54:08.040202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:11.399 [2024-11-27 08:54:08.040275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:11.399 [2024-11-27 08:54:08.040409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:22:11.399 [2024-11-27 08:54:08.040425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:11.399 [2024-11-27 08:54:08.040511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:11.399 [2024-11-27 08:54:08.040607] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:11.399 [2024-11-27 08:54:08.040636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:11.399 [2024-11-27 08:54:08.040785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.399 pt1 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.399 08:54:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.399 "name": "raid_bdev1", 00:22:11.399 "uuid": "264a7ee8-4f32-4e07-b98c-d6a4dc212745", 00:22:11.399 "strip_size_kb": 0, 00:22:11.399 "state": "online", 00:22:11.399 "raid_level": "raid1", 00:22:11.399 "superblock": true, 00:22:11.399 "num_base_bdevs": 2, 00:22:11.399 "num_base_bdevs_discovered": 1, 00:22:11.399 "num_base_bdevs_operational": 1, 00:22:11.399 "base_bdevs_list": [ 00:22:11.399 { 00:22:11.399 "name": null, 00:22:11.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.399 "is_configured": false, 00:22:11.399 "data_offset": 256, 00:22:11.399 "data_size": 7936 00:22:11.399 }, 00:22:11.399 { 00:22:11.399 "name": "pt2", 00:22:11.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:11.399 "is_configured": true, 00:22:11.399 "data_offset": 256, 00:22:11.399 "data_size": 7936 00:22:11.399 } 00:22:11.399 ] 00:22:11.399 }' 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.399 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.965 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.966 [2024-11-27 08:54:08.629256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 264a7ee8-4f32-4e07-b98c-d6a4dc212745 '!=' 264a7ee8-4f32-4e07-b98c-d6a4dc212745 ']' 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89304 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' -z 89304 ']' 00:22:11.966 08:54:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # kill -0 89304 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # uname 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 89304 00:22:11.966 killing process with pid 89304 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # echo 'killing process with pid 89304' 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # kill 89304 00:22:11.966 [2024-11-27 08:54:08.716877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:11.966 08:54:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@975 -- # wait 89304 00:22:11.966 [2024-11-27 08:54:08.717026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.966 [2024-11-27 08:54:08.717103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.966 [2024-11-27 08:54:08.717127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:12.225 [2024-11-27 08:54:08.910427] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:13.601 08:54:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:22:13.601 00:22:13.601 real 0m6.831s 00:22:13.601 user 0m10.743s 00:22:13.601 sys 0m1.026s 
00:22:13.601 08:54:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:13.601 08:54:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.601 ************************************ 00:22:13.601 END TEST raid_superblock_test_md_interleaved 00:22:13.601 ************************************ 00:22:13.601 08:54:10 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:13.601 08:54:10 bdev_raid -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:22:13.601 08:54:10 bdev_raid -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:13.601 08:54:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:13.601 ************************************ 00:22:13.601 START TEST raid_rebuild_test_sb_md_interleaved 00:22:13.601 ************************************ 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # raid_rebuild_test raid1 2 true false false 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:13.601 08:54:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:13.601 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89638 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89638 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # '[' -z 89638 ']' 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.601 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:13.602 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:13.602 08:54:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.602 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:13.602 Zero copy mechanism will not be used. 00:22:13.602 [2024-11-27 08:54:10.190354] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:22:13.602 [2024-11-27 08:54:10.190561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89638 ] 00:22:13.861 [2024-11-27 08:54:10.377542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.861 [2024-11-27 08:54:10.525037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.120 [2024-11-27 08:54:10.756537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.120 [2024-11-27 08:54:10.756597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.688 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:14.688 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@865 -- # return 0 00:22:14.688 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 BaseBdev1_malloc 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 [2024-11-27 08:54:11.270292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:14.689 [2024-11-27 08:54:11.270570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.689 [2024-11-27 08:54:11.270620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:14.689 [2024-11-27 08:54:11.270641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.689 [2024-11-27 08:54:11.273295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.689 [2024-11-27 08:54:11.273372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:14.689 BaseBdev1 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 BaseBdev2_malloc 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.689 [2024-11-27 08:54:11.326998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:14.689 [2024-11-27 08:54:11.327091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.689 [2024-11-27 08:54:11.327123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:14.689 [2024-11-27 08:54:11.327144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.689 [2024-11-27 08:54:11.329836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.689 [2024-11-27 08:54:11.330024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:14.689 BaseBdev2 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 spare_malloc 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 spare_delay 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 [2024-11-27 08:54:11.407501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:14.689 [2024-11-27 08:54:11.407734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.689 [2024-11-27 08:54:11.407777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:14.689 [2024-11-27 08:54:11.407798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.689 [2024-11-27 08:54:11.410483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.689 [2024-11-27 08:54:11.410543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:14.689 spare 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 [2024-11-27 08:54:11.415547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:14.689 [2024-11-27 08:54:11.418212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:14.689 [2024-11-27 
08:54:11.418631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:14.689 [2024-11-27 08:54:11.418663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:14.689 [2024-11-27 08:54:11.418776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:14.689 [2024-11-27 08:54:11.418881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:14.689 [2024-11-27 08:54:11.418896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:14.689 [2024-11-27 08:54:11.419002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.967 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.967 "name": "raid_bdev1", 00:22:14.967 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:14.967 "strip_size_kb": 0, 00:22:14.967 "state": "online", 00:22:14.967 "raid_level": "raid1", 00:22:14.967 "superblock": true, 00:22:14.967 "num_base_bdevs": 2, 00:22:14.967 "num_base_bdevs_discovered": 2, 00:22:14.967 "num_base_bdevs_operational": 2, 00:22:14.967 "base_bdevs_list": [ 00:22:14.967 { 00:22:14.967 "name": "BaseBdev1", 00:22:14.967 "uuid": "445cdd28-5577-5b83-99bf-31812c4028c9", 00:22:14.967 "is_configured": true, 00:22:14.967 "data_offset": 256, 00:22:14.967 "data_size": 7936 00:22:14.967 }, 00:22:14.967 { 00:22:14.967 "name": "BaseBdev2", 00:22:14.967 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:14.967 "is_configured": true, 00:22:14.967 "data_offset": 256, 00:22:14.967 "data_size": 7936 00:22:14.967 } 00:22:14.967 ] 00:22:14.967 }' 00:22:14.967 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.967 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.234 08:54:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:15.234 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.234 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.234 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.234 [2024-11-27 08:54:11.948116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.234 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.493 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:15.493 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.493 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.493 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:15.493 08:54:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:15.493 08:54:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.493 [2024-11-27 08:54:12.051715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.493 08:54:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.493 "name": "raid_bdev1", 00:22:15.493 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:15.493 "strip_size_kb": 0, 00:22:15.493 "state": "online", 00:22:15.493 "raid_level": "raid1", 00:22:15.493 "superblock": true, 00:22:15.493 "num_base_bdevs": 2, 00:22:15.493 "num_base_bdevs_discovered": 1, 00:22:15.493 "num_base_bdevs_operational": 1, 00:22:15.493 "base_bdevs_list": [ 00:22:15.493 { 00:22:15.493 "name": null, 00:22:15.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.493 "is_configured": false, 00:22:15.493 "data_offset": 0, 00:22:15.493 "data_size": 7936 00:22:15.493 }, 00:22:15.493 { 00:22:15.493 "name": "BaseBdev2", 00:22:15.493 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:15.493 "is_configured": true, 00:22:15.493 "data_offset": 256, 00:22:15.493 "data_size": 7936 00:22:15.493 } 00:22:15.493 ] 00:22:15.493 }' 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.493 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.061 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:16.061 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.061 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.061 [2024-11-27 08:54:12.587956] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:16.061 [2024-11-27 08:54:12.606828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:16.061 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.061 08:54:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:16.061 [2024-11-27 08:54:12.609963] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:16.998 "name": "raid_bdev1", 00:22:16.998 
"uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:16.998 "strip_size_kb": 0, 00:22:16.998 "state": "online", 00:22:16.998 "raid_level": "raid1", 00:22:16.998 "superblock": true, 00:22:16.998 "num_base_bdevs": 2, 00:22:16.998 "num_base_bdevs_discovered": 2, 00:22:16.998 "num_base_bdevs_operational": 2, 00:22:16.998 "process": { 00:22:16.998 "type": "rebuild", 00:22:16.998 "target": "spare", 00:22:16.998 "progress": { 00:22:16.998 "blocks": 2560, 00:22:16.998 "percent": 32 00:22:16.998 } 00:22:16.998 }, 00:22:16.998 "base_bdevs_list": [ 00:22:16.998 { 00:22:16.998 "name": "spare", 00:22:16.998 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:16.998 "is_configured": true, 00:22:16.998 "data_offset": 256, 00:22:16.998 "data_size": 7936 00:22:16.998 }, 00:22:16.998 { 00:22:16.998 "name": "BaseBdev2", 00:22:16.998 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:16.998 "is_configured": true, 00:22:16.998 "data_offset": 256, 00:22:16.998 "data_size": 7936 00:22:16.998 } 00:22:16.998 ] 00:22:16.998 }' 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:16.998 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.257 [2024-11-27 08:54:13.783753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:22:17.257 [2024-11-27 08:54:13.822265] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:17.257 [2024-11-27 08:54:13.822603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.257 [2024-11-27 08:54:13.822908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:17.257 [2024-11-27 08:54:13.822984] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.257 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.258 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.258 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.258 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.258 "name": "raid_bdev1", 00:22:17.258 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:17.258 "strip_size_kb": 0, 00:22:17.258 "state": "online", 00:22:17.258 "raid_level": "raid1", 00:22:17.258 "superblock": true, 00:22:17.258 "num_base_bdevs": 2, 00:22:17.258 "num_base_bdevs_discovered": 1, 00:22:17.258 "num_base_bdevs_operational": 1, 00:22:17.258 "base_bdevs_list": [ 00:22:17.258 { 00:22:17.258 "name": null, 00:22:17.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.258 "is_configured": false, 00:22:17.258 "data_offset": 0, 00:22:17.258 "data_size": 7936 00:22:17.258 }, 00:22:17.258 { 00:22:17.258 "name": "BaseBdev2", 00:22:17.258 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:17.258 "is_configured": true, 00:22:17.258 "data_offset": 256, 00:22:17.258 "data_size": 7936 00:22:17.258 } 00:22:17.258 ] 00:22:17.258 }' 00:22:17.258 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.258 08:54:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:17.826 "name": "raid_bdev1", 00:22:17.826 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:17.826 "strip_size_kb": 0, 00:22:17.826 "state": "online", 00:22:17.826 "raid_level": "raid1", 00:22:17.826 "superblock": true, 00:22:17.826 "num_base_bdevs": 2, 00:22:17.826 "num_base_bdevs_discovered": 1, 00:22:17.826 "num_base_bdevs_operational": 1, 00:22:17.826 "base_bdevs_list": [ 00:22:17.826 { 00:22:17.826 "name": null, 00:22:17.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.826 "is_configured": false, 00:22:17.826 "data_offset": 0, 00:22:17.826 "data_size": 7936 00:22:17.826 }, 00:22:17.826 { 00:22:17.826 "name": "BaseBdev2", 00:22:17.826 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:17.826 "is_configured": true, 00:22:17.826 "data_offset": 256, 00:22:17.826 "data_size": 7936 00:22:17.826 } 00:22:17.826 ] 00:22:17.826 }' 
00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:17.826 [2024-11-27 08:54:14.494221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:17.826 [2024-11-27 08:54:14.511044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.826 08:54:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:17.826 [2024-11-27 08:54:14.514128] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:18.762 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.762 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:18.762 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:18.762 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:22:18.762 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:19.022 "name": "raid_bdev1", 00:22:19.022 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:19.022 "strip_size_kb": 0, 00:22:19.022 "state": "online", 00:22:19.022 "raid_level": "raid1", 00:22:19.022 "superblock": true, 00:22:19.022 "num_base_bdevs": 2, 00:22:19.022 "num_base_bdevs_discovered": 2, 00:22:19.022 "num_base_bdevs_operational": 2, 00:22:19.022 "process": { 00:22:19.022 "type": "rebuild", 00:22:19.022 "target": "spare", 00:22:19.022 "progress": { 00:22:19.022 "blocks": 2304, 00:22:19.022 "percent": 29 00:22:19.022 } 00:22:19.022 }, 00:22:19.022 "base_bdevs_list": [ 00:22:19.022 { 00:22:19.022 "name": "spare", 00:22:19.022 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:19.022 "is_configured": true, 00:22:19.022 "data_offset": 256, 00:22:19.022 "data_size": 7936 00:22:19.022 }, 00:22:19.022 { 00:22:19.022 "name": "BaseBdev2", 00:22:19.022 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:19.022 "is_configured": true, 00:22:19.022 "data_offset": 256, 00:22:19.022 "data_size": 7936 00:22:19.022 } 00:22:19.022 ] 00:22:19.022 }' 00:22:19.022 08:54:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:19.022 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=809 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:19.022 08:54:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:19.022 "name": "raid_bdev1", 00:22:19.022 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:19.022 "strip_size_kb": 0, 00:22:19.022 "state": "online", 00:22:19.022 "raid_level": "raid1", 00:22:19.022 "superblock": true, 00:22:19.022 "num_base_bdevs": 2, 00:22:19.022 "num_base_bdevs_discovered": 2, 00:22:19.022 "num_base_bdevs_operational": 2, 00:22:19.022 "process": { 00:22:19.022 "type": "rebuild", 00:22:19.022 "target": "spare", 00:22:19.022 "progress": { 00:22:19.022 "blocks": 2816, 00:22:19.022 "percent": 35 00:22:19.022 } 00:22:19.022 }, 00:22:19.022 "base_bdevs_list": [ 00:22:19.022 { 00:22:19.022 "name": "spare", 00:22:19.022 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:19.022 "is_configured": true, 00:22:19.022 "data_offset": 256, 00:22:19.022 "data_size": 7936 00:22:19.022 }, 00:22:19.022 { 00:22:19.022 "name": "BaseBdev2", 00:22:19.022 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:19.022 "is_configured": true, 00:22:19.022 "data_offset": 256, 00:22:19.022 "data_size": 7936 00:22:19.022 } 00:22:19.022 ] 00:22:19.022 }' 00:22:19.022 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:19.308 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:19.308 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:19.308 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.308 08:54:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.267 08:54:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:20.267 "name": "raid_bdev1", 00:22:20.267 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:20.267 "strip_size_kb": 0, 00:22:20.267 "state": "online", 00:22:20.267 "raid_level": "raid1", 00:22:20.267 "superblock": true, 00:22:20.267 "num_base_bdevs": 2, 00:22:20.267 "num_base_bdevs_discovered": 2, 00:22:20.267 "num_base_bdevs_operational": 2, 00:22:20.267 "process": { 00:22:20.267 "type": "rebuild", 00:22:20.267 "target": "spare", 00:22:20.267 "progress": { 00:22:20.267 "blocks": 5888, 00:22:20.267 "percent": 74 00:22:20.267 } 00:22:20.267 }, 00:22:20.267 "base_bdevs_list": [ 00:22:20.267 { 00:22:20.267 "name": "spare", 00:22:20.267 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:20.267 "is_configured": true, 00:22:20.267 "data_offset": 256, 00:22:20.267 "data_size": 7936 00:22:20.267 }, 00:22:20.267 { 00:22:20.267 "name": "BaseBdev2", 00:22:20.267 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:20.267 "is_configured": true, 00:22:20.267 "data_offset": 256, 00:22:20.267 "data_size": 7936 00:22:20.267 } 00:22:20.267 ] 00:22:20.267 }' 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.267 08:54:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:21.201 [2024-11-27 08:54:17.642405] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:21.201 [2024-11-27 08:54:17.642562] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:21.201 [2024-11-27 08:54:17.642744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.460 08:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:21.460 08:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:21.460 08:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:21.460 08:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:21.460 08:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:21.460 08:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:21.460 08:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.460 08:54:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.460 "name": "raid_bdev1", 00:22:21.460 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:21.460 "strip_size_kb": 0, 00:22:21.460 "state": "online", 00:22:21.460 "raid_level": "raid1", 00:22:21.460 "superblock": true, 00:22:21.460 "num_base_bdevs": 2, 00:22:21.460 
"num_base_bdevs_discovered": 2, 00:22:21.460 "num_base_bdevs_operational": 2, 00:22:21.460 "base_bdevs_list": [ 00:22:21.460 { 00:22:21.460 "name": "spare", 00:22:21.460 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:21.460 "is_configured": true, 00:22:21.460 "data_offset": 256, 00:22:21.460 "data_size": 7936 00:22:21.460 }, 00:22:21.460 { 00:22:21.460 "name": "BaseBdev2", 00:22:21.460 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:21.460 "is_configured": true, 00:22:21.460 "data_offset": 256, 00:22:21.460 "data_size": 7936 00:22:21.460 } 00:22:21.460 ] 00:22:21.460 }' 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.460 08:54:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.460 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.460 "name": "raid_bdev1", 00:22:21.460 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:21.460 "strip_size_kb": 0, 00:22:21.460 "state": "online", 00:22:21.460 "raid_level": "raid1", 00:22:21.460 "superblock": true, 00:22:21.460 "num_base_bdevs": 2, 00:22:21.460 "num_base_bdevs_discovered": 2, 00:22:21.461 "num_base_bdevs_operational": 2, 00:22:21.461 "base_bdevs_list": [ 00:22:21.461 { 00:22:21.461 "name": "spare", 00:22:21.461 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:21.461 "is_configured": true, 00:22:21.461 "data_offset": 256, 00:22:21.461 "data_size": 7936 00:22:21.461 }, 00:22:21.461 { 00:22:21.461 "name": "BaseBdev2", 00:22:21.461 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:21.461 "is_configured": true, 00:22:21.461 "data_offset": 256, 00:22:21.461 "data_size": 7936 00:22:21.461 } 00:22:21.461 ] 00:22:21.461 }' 00:22:21.461 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:21.720 08:54:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.720 "name": 
"raid_bdev1", 00:22:21.720 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:21.720 "strip_size_kb": 0, 00:22:21.720 "state": "online", 00:22:21.720 "raid_level": "raid1", 00:22:21.720 "superblock": true, 00:22:21.720 "num_base_bdevs": 2, 00:22:21.720 "num_base_bdevs_discovered": 2, 00:22:21.720 "num_base_bdevs_operational": 2, 00:22:21.720 "base_bdevs_list": [ 00:22:21.720 { 00:22:21.720 "name": "spare", 00:22:21.720 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:21.720 "is_configured": true, 00:22:21.720 "data_offset": 256, 00:22:21.720 "data_size": 7936 00:22:21.720 }, 00:22:21.720 { 00:22:21.720 "name": "BaseBdev2", 00:22:21.720 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:21.720 "is_configured": true, 00:22:21.720 "data_offset": 256, 00:22:21.720 "data_size": 7936 00:22:21.720 } 00:22:21.720 ] 00:22:21.720 }' 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.720 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.288 [2024-11-27 08:54:18.772694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.288 [2024-11-27 08:54:18.772893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.288 [2024-11-27 08:54:18.773052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.288 [2024-11-27 08:54:18.773170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.288 [2024-11-27 
08:54:18.773189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.288 08:54:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.288 [2024-11-27 08:54:18.844663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:22.288 [2024-11-27 08:54:18.844917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.288 [2024-11-27 08:54:18.844962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:22.288 [2024-11-27 08:54:18.844978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.288 [2024-11-27 08:54:18.847920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.288 [2024-11-27 08:54:18.848111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:22.288 [2024-11-27 08:54:18.848209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:22.288 [2024-11-27 08:54:18.848296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:22.288 [2024-11-27 08:54:18.848473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.288 spare 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.288 [2024-11-27 08:54:18.948599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:22.288 [2024-11-27 08:54:18.948638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:22.288 [2024-11-27 08:54:18.948760] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:22.288 [2024-11-27 08:54:18.948872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:22.288 [2024-11-27 08:54:18.948886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:22.288 [2024-11-27 08:54:18.949027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.288 08:54:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.288 08:54:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.288 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.288 "name": "raid_bdev1", 00:22:22.288 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:22.288 "strip_size_kb": 0, 00:22:22.288 "state": "online", 00:22:22.288 "raid_level": "raid1", 00:22:22.288 "superblock": true, 00:22:22.288 "num_base_bdevs": 2, 00:22:22.288 "num_base_bdevs_discovered": 2, 00:22:22.288 "num_base_bdevs_operational": 2, 00:22:22.288 "base_bdevs_list": [ 00:22:22.288 { 00:22:22.288 "name": "spare", 00:22:22.288 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:22.288 "is_configured": true, 00:22:22.288 "data_offset": 256, 00:22:22.288 "data_size": 7936 00:22:22.288 }, 00:22:22.288 { 00:22:22.288 "name": "BaseBdev2", 00:22:22.288 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:22.288 "is_configured": true, 00:22:22.288 "data_offset": 256, 00:22:22.288 "data_size": 7936 00:22:22.288 } 00:22:22.288 ] 00:22:22.288 }' 00:22:22.288 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.288 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:22.855 08:54:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:22.855 "name": "raid_bdev1", 00:22:22.855 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:22.855 "strip_size_kb": 0, 00:22:22.855 "state": "online", 00:22:22.855 "raid_level": "raid1", 00:22:22.855 "superblock": true, 00:22:22.855 "num_base_bdevs": 2, 00:22:22.855 "num_base_bdevs_discovered": 2, 00:22:22.855 "num_base_bdevs_operational": 2, 00:22:22.855 "base_bdevs_list": [ 00:22:22.855 { 00:22:22.855 "name": "spare", 00:22:22.855 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:22.855 "is_configured": true, 00:22:22.855 "data_offset": 256, 00:22:22.855 "data_size": 7936 00:22:22.855 }, 00:22:22.855 { 00:22:22.855 "name": "BaseBdev2", 00:22:22.855 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:22.855 "is_configured": true, 00:22:22.855 "data_offset": 256, 00:22:22.855 "data_size": 7936 00:22:22.855 } 00:22:22.855 ] 00:22:22.855 }' 00:22:22.855 08:54:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.855 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.114 [2024-11-27 08:54:19.653360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:23.114 08:54:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.114 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.115 "name": "raid_bdev1", 00:22:23.115 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:23.115 "strip_size_kb": 0, 00:22:23.115 "state": "online", 00:22:23.115 
"raid_level": "raid1", 00:22:23.115 "superblock": true, 00:22:23.115 "num_base_bdevs": 2, 00:22:23.115 "num_base_bdevs_discovered": 1, 00:22:23.115 "num_base_bdevs_operational": 1, 00:22:23.115 "base_bdevs_list": [ 00:22:23.115 { 00:22:23.115 "name": null, 00:22:23.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.115 "is_configured": false, 00:22:23.115 "data_offset": 0, 00:22:23.115 "data_size": 7936 00:22:23.115 }, 00:22:23.115 { 00:22:23.115 "name": "BaseBdev2", 00:22:23.115 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:23.115 "is_configured": true, 00:22:23.115 "data_offset": 256, 00:22:23.115 "data_size": 7936 00:22:23.115 } 00:22:23.115 ] 00:22:23.115 }' 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.115 08:54:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.682 08:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:23.682 08:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.682 08:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:23.682 [2024-11-27 08:54:20.145550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.682 [2024-11-27 08:54:20.145837] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:23.682 [2024-11-27 08:54:20.145864] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:23.682 [2024-11-27 08:54:20.145923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.682 [2024-11-27 08:54:20.162758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:23.682 08:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.682 08:54:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:23.682 [2024-11-27 08:54:20.165791] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.632 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:22:24.632 "name": "raid_bdev1", 00:22:24.632 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:24.632 "strip_size_kb": 0, 00:22:24.633 "state": "online", 00:22:24.633 "raid_level": "raid1", 00:22:24.633 "superblock": true, 00:22:24.633 "num_base_bdevs": 2, 00:22:24.633 "num_base_bdevs_discovered": 2, 00:22:24.633 "num_base_bdevs_operational": 2, 00:22:24.633 "process": { 00:22:24.633 "type": "rebuild", 00:22:24.633 "target": "spare", 00:22:24.633 "progress": { 00:22:24.633 "blocks": 2304, 00:22:24.633 "percent": 29 00:22:24.633 } 00:22:24.633 }, 00:22:24.633 "base_bdevs_list": [ 00:22:24.633 { 00:22:24.633 "name": "spare", 00:22:24.633 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:24.633 "is_configured": true, 00:22:24.633 "data_offset": 256, 00:22:24.633 "data_size": 7936 00:22:24.633 }, 00:22:24.633 { 00:22:24.633 "name": "BaseBdev2", 00:22:24.633 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:24.633 "is_configured": true, 00:22:24.633 "data_offset": 256, 00:22:24.633 "data_size": 7936 00:22:24.633 } 00:22:24.633 ] 00:22:24.633 }' 00:22:24.633 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.633 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.633 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.633 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.633 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:24.633 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.633 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.633 [2024-11-27 08:54:21.331271] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.633 [2024-11-27 08:54:21.377171] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:24.633 [2024-11-27 08:54:21.377443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.633 [2024-11-27 08:54:21.377701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.633 [2024-11-27 08:54:21.377763] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.941 08:54:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.941 "name": "raid_bdev1", 00:22:24.941 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:24.941 "strip_size_kb": 0, 00:22:24.941 "state": "online", 00:22:24.941 "raid_level": "raid1", 00:22:24.941 "superblock": true, 00:22:24.941 "num_base_bdevs": 2, 00:22:24.941 "num_base_bdevs_discovered": 1, 00:22:24.941 "num_base_bdevs_operational": 1, 00:22:24.941 "base_bdevs_list": [ 00:22:24.941 { 00:22:24.941 "name": null, 00:22:24.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.941 "is_configured": false, 00:22:24.941 "data_offset": 0, 00:22:24.941 "data_size": 7936 00:22:24.941 }, 00:22:24.941 { 00:22:24.941 "name": "BaseBdev2", 00:22:24.941 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:24.941 "is_configured": true, 00:22:24.941 "data_offset": 256, 00:22:24.941 "data_size": 7936 00:22:24.941 } 00:22:24.941 ] 00:22:24.941 }' 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.941 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.201 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:25.201 08:54:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.201 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:25.201 [2024-11-27 08:54:21.927656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:25.201 [2024-11-27 08:54:21.927762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.201 [2024-11-27 08:54:21.927804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:25.201 [2024-11-27 08:54:21.927825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.201 [2024-11-27 08:54:21.928118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.201 [2024-11-27 08:54:21.928149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:25.201 [2024-11-27 08:54:21.928240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:25.201 [2024-11-27 08:54:21.928274] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:25.201 [2024-11-27 08:54:21.928290] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:25.201 [2024-11-27 08:54:21.928351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.201 [2024-11-27 08:54:21.945110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:25.201 spare 00:22:25.201 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.201 08:54:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:25.201 [2024-11-27 08:54:21.948175] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:26.581 08:54:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:26.581 "name": "raid_bdev1", 00:22:26.581 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:26.581 "strip_size_kb": 0, 00:22:26.581 "state": "online", 00:22:26.581 "raid_level": "raid1", 00:22:26.581 "superblock": true, 00:22:26.581 "num_base_bdevs": 2, 00:22:26.581 "num_base_bdevs_discovered": 2, 00:22:26.581 "num_base_bdevs_operational": 2, 00:22:26.581 "process": { 00:22:26.581 "type": "rebuild", 00:22:26.581 "target": "spare", 00:22:26.581 "progress": { 00:22:26.581 "blocks": 2560, 00:22:26.581 "percent": 32 00:22:26.581 } 00:22:26.581 }, 00:22:26.581 "base_bdevs_list": [ 00:22:26.581 { 00:22:26.581 "name": "spare", 00:22:26.581 "uuid": "10d8f82e-b0fc-5f96-b376-0b26b7e249ce", 00:22:26.581 "is_configured": true, 00:22:26.581 "data_offset": 256, 00:22:26.581 "data_size": 7936 00:22:26.581 }, 00:22:26.581 { 00:22:26.581 "name": "BaseBdev2", 00:22:26.581 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:26.581 "is_configured": true, 00:22:26.581 "data_offset": 256, 00:22:26.581 "data_size": 7936 00:22:26.581 } 00:22:26.581 ] 00:22:26.581 }' 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:26.581 [2024-11-27 
08:54:23.113533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:26.581 [2024-11-27 08:54:23.159148] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:26.581 [2024-11-27 08:54:23.159394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.581 [2024-11-27 08:54:23.159436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:26.581 [2024-11-27 08:54:23.159450] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.581 08:54:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.581 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.581 "name": "raid_bdev1", 00:22:26.581 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:26.581 "strip_size_kb": 0, 00:22:26.581 "state": "online", 00:22:26.581 "raid_level": "raid1", 00:22:26.581 "superblock": true, 00:22:26.581 "num_base_bdevs": 2, 00:22:26.581 "num_base_bdevs_discovered": 1, 00:22:26.581 "num_base_bdevs_operational": 1, 00:22:26.581 "base_bdevs_list": [ 00:22:26.581 { 00:22:26.581 "name": null, 00:22:26.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.581 "is_configured": false, 00:22:26.581 "data_offset": 0, 00:22:26.581 "data_size": 7936 00:22:26.581 }, 00:22:26.581 { 00:22:26.582 "name": "BaseBdev2", 00:22:26.582 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:26.582 "is_configured": true, 00:22:26.582 "data_offset": 256, 00:22:26.582 "data_size": 7936 00:22:26.582 } 00:22:26.582 ] 00:22:26.582 }' 00:22:26.582 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.582 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:27.149 08:54:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.149 "name": "raid_bdev1", 00:22:27.149 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:27.149 "strip_size_kb": 0, 00:22:27.149 "state": "online", 00:22:27.149 "raid_level": "raid1", 00:22:27.149 "superblock": true, 00:22:27.149 "num_base_bdevs": 2, 00:22:27.149 "num_base_bdevs_discovered": 1, 00:22:27.149 "num_base_bdevs_operational": 1, 00:22:27.149 "base_bdevs_list": [ 00:22:27.149 { 00:22:27.149 "name": null, 00:22:27.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.149 "is_configured": false, 00:22:27.149 "data_offset": 0, 00:22:27.149 "data_size": 7936 00:22:27.149 }, 00:22:27.149 { 00:22:27.149 "name": "BaseBdev2", 00:22:27.149 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:27.149 "is_configured": true, 00:22:27.149 "data_offset": 256, 
00:22:27.149 "data_size": 7936 00:22:27.149 } 00:22:27.149 ] 00:22:27.149 }' 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.149 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:27.149 [2024-11-27 08:54:23.849018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:27.149 [2024-11-27 08:54:23.849226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.149 [2024-11-27 08:54:23.849276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:27.149 [2024-11-27 08:54:23.849292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.149 [2024-11-27 08:54:23.849571] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.149 [2024-11-27 08:54:23.849595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:27.149 [2024-11-27 08:54:23.849674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:27.149 [2024-11-27 08:54:23.849697] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:27.149 [2024-11-27 08:54:23.849712] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:27.149 [2024-11-27 08:54:23.849726] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:27.149 BaseBdev1 00:22:27.150 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.150 08:54:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.526 08:54:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.526 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.526 "name": "raid_bdev1", 00:22:28.526 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:28.526 "strip_size_kb": 0, 00:22:28.526 "state": "online", 00:22:28.526 "raid_level": "raid1", 00:22:28.526 "superblock": true, 00:22:28.527 "num_base_bdevs": 2, 00:22:28.527 "num_base_bdevs_discovered": 1, 00:22:28.527 "num_base_bdevs_operational": 1, 00:22:28.527 "base_bdevs_list": [ 00:22:28.527 { 00:22:28.527 "name": null, 00:22:28.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.527 "is_configured": false, 00:22:28.527 "data_offset": 0, 00:22:28.527 "data_size": 7936 00:22:28.527 }, 00:22:28.527 { 00:22:28.527 "name": "BaseBdev2", 00:22:28.527 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:28.527 "is_configured": true, 00:22:28.527 "data_offset": 256, 00:22:28.527 "data_size": 7936 00:22:28.527 } 00:22:28.527 ] 00:22:28.527 }' 00:22:28.527 08:54:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.527 08:54:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.786 "name": "raid_bdev1", 00:22:28.786 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:28.786 "strip_size_kb": 0, 00:22:28.786 "state": "online", 00:22:28.786 "raid_level": "raid1", 00:22:28.786 "superblock": true, 00:22:28.786 "num_base_bdevs": 2, 00:22:28.786 "num_base_bdevs_discovered": 1, 00:22:28.786 "num_base_bdevs_operational": 1, 00:22:28.786 "base_bdevs_list": [ 00:22:28.786 { 00:22:28.786 "name": 
null, 00:22:28.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.786 "is_configured": false, 00:22:28.786 "data_offset": 0, 00:22:28.786 "data_size": 7936 00:22:28.786 }, 00:22:28.786 { 00:22:28.786 "name": "BaseBdev2", 00:22:28.786 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:28.786 "is_configured": true, 00:22:28.786 "data_offset": 256, 00:22:28.786 "data_size": 7936 00:22:28.786 } 00:22:28.786 ] 00:22:28.786 }' 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.786 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:29.045 [2024-11-27 08:54:25.545618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.045 [2024-11-27 08:54:25.545988] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:29.046 [2024-11-27 08:54:25.546026] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:29.046 request: 00:22:29.046 { 00:22:29.046 "base_bdev": "BaseBdev1", 00:22:29.046 "raid_bdev": "raid_bdev1", 00:22:29.046 "method": "bdev_raid_add_base_bdev", 00:22:29.046 "req_id": 1 00:22:29.046 } 00:22:29.046 Got JSON-RPC error response 00:22:29.046 response: 00:22:29.046 { 00:22:29.046 "code": -22, 00:22:29.046 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:29.046 } 00:22:29.046 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:29.046 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:29.046 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.046 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.046 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.046 08:54:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.981 "name": "raid_bdev1", 00:22:29.981 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:29.981 "strip_size_kb": 0, 
00:22:29.981 "state": "online", 00:22:29.981 "raid_level": "raid1", 00:22:29.981 "superblock": true, 00:22:29.981 "num_base_bdevs": 2, 00:22:29.981 "num_base_bdevs_discovered": 1, 00:22:29.981 "num_base_bdevs_operational": 1, 00:22:29.981 "base_bdevs_list": [ 00:22:29.981 { 00:22:29.981 "name": null, 00:22:29.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.981 "is_configured": false, 00:22:29.981 "data_offset": 0, 00:22:29.981 "data_size": 7936 00:22:29.981 }, 00:22:29.981 { 00:22:29.981 "name": "BaseBdev2", 00:22:29.981 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:29.981 "is_configured": true, 00:22:29.981 "data_offset": 256, 00:22:29.981 "data_size": 7936 00:22:29.981 } 00:22:29.981 ] 00:22:29.981 }' 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.981 08:54:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:30.548 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.548 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.548 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:30.548 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:30.548 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.549 
08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.549 "name": "raid_bdev1", 00:22:30.549 "uuid": "4680e5bb-5632-4d9c-b6ad-d25f82363e1c", 00:22:30.549 "strip_size_kb": 0, 00:22:30.549 "state": "online", 00:22:30.549 "raid_level": "raid1", 00:22:30.549 "superblock": true, 00:22:30.549 "num_base_bdevs": 2, 00:22:30.549 "num_base_bdevs_discovered": 1, 00:22:30.549 "num_base_bdevs_operational": 1, 00:22:30.549 "base_bdevs_list": [ 00:22:30.549 { 00:22:30.549 "name": null, 00:22:30.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.549 "is_configured": false, 00:22:30.549 "data_offset": 0, 00:22:30.549 "data_size": 7936 00:22:30.549 }, 00:22:30.549 { 00:22:30.549 "name": "BaseBdev2", 00:22:30.549 "uuid": "02adfa26-bd85-55af-ac40-3a91467029d1", 00:22:30.549 "is_configured": true, 00:22:30.549 "data_offset": 256, 00:22:30.549 "data_size": 7936 00:22:30.549 } 00:22:30.549 ] 00:22:30.549 }' 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89638 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' -z 89638 ']' 00:22:30.549 08:54:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # kill -0 89638 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # uname 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 89638 00:22:30.549 killing process with pid 89638 00:22:30.549 Received shutdown signal, test time was about 60.000000 seconds 00:22:30.549 00:22:30.549 Latency(us) 00:22:30.549 [2024-11-27T08:54:27.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.549 [2024-11-27T08:54:27.309Z] =================================================================================================================== 00:22:30.549 [2024-11-27T08:54:27.309Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # echo 'killing process with pid 89638' 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # kill 89638 00:22:30.549 08:54:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@975 -- # wait 89638 00:22:30.549 [2024-11-27 08:54:27.263280] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:30.549 [2024-11-27 08:54:27.263598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.549 [2024-11-27 08:54:27.263705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:22:30.549 [2024-11-27 08:54:27.263729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:30.808 [2024-11-27 08:54:27.546688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:32.209 08:54:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:22:32.209 00:22:32.209 real 0m18.597s 00:22:32.209 user 0m25.224s 00:22:32.209 sys 0m1.465s 00:22:32.209 ************************************ 00:22:32.209 END TEST raid_rebuild_test_sb_md_interleaved 00:22:32.209 ************************************ 00:22:32.209 08:54:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:32.209 08:54:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:32.209 08:54:28 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:22:32.209 08:54:28 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:22:32.209 08:54:28 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89638 ']' 00:22:32.209 08:54:28 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89638 00:22:32.209 08:54:28 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:22:32.209 00:22:32.209 real 13m11.868s 00:22:32.209 user 18m29.726s 00:22:32.209 sys 1m52.544s 00:22:32.209 08:54:28 bdev_raid -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:32.209 08:54:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.209 ************************************ 00:22:32.209 END TEST bdev_raid 00:22:32.209 ************************************ 00:22:32.209 08:54:28 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:32.209 08:54:28 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:22:32.209 08:54:28 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:32.209 08:54:28 -- common/autotest_common.sh@10 -- # set +x 00:22:32.209 
************************************ 00:22:32.209 START TEST spdkcli_raid 00:22:32.209 ************************************ 00:22:32.209 08:54:28 spdkcli_raid -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:32.209 * Looking for test storage... 00:22:32.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:32.209 08:54:28 spdkcli_raid -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:32.209 08:54:28 spdkcli_raid -- common/autotest_common.sh@1690 -- # lcov --version 00:22:32.209 08:54:28 spdkcli_raid -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:32.469 08:54:28 spdkcli_raid -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.469 08:54:28 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:22:32.469 08:54:28 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.469 08:54:28 spdkcli_raid -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:32.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.469 --rc genhtml_branch_coverage=1 00:22:32.469 --rc genhtml_function_coverage=1 00:22:32.469 --rc genhtml_legend=1 00:22:32.469 --rc geninfo_all_blocks=1 00:22:32.469 --rc geninfo_unexecuted_blocks=1 00:22:32.469 00:22:32.469 ' 00:22:32.469 08:54:28 spdkcli_raid -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:32.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.469 --rc genhtml_branch_coverage=1 00:22:32.469 --rc genhtml_function_coverage=1 00:22:32.469 --rc genhtml_legend=1 00:22:32.469 --rc geninfo_all_blocks=1 00:22:32.469 --rc geninfo_unexecuted_blocks=1 00:22:32.469 00:22:32.469 ' 00:22:32.469 
08:54:28 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:32.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.469 --rc genhtml_branch_coverage=1 00:22:32.469 --rc genhtml_function_coverage=1 00:22:32.469 --rc genhtml_legend=1 00:22:32.469 --rc geninfo_all_blocks=1 00:22:32.469 --rc geninfo_unexecuted_blocks=1 00:22:32.469 00:22:32.469 ' 00:22:32.469 08:54:28 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:32.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.469 --rc genhtml_branch_coverage=1 00:22:32.469 --rc genhtml_function_coverage=1 00:22:32.469 --rc genhtml_legend=1 00:22:32.469 --rc geninfo_all_blocks=1 00:22:32.469 --rc geninfo_unexecuted_blocks=1 00:22:32.469 00:22:32.469 ' 00:22:32.469 08:54:28 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:32.469 08:54:28 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:32.469 08:54:28 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:32.469 08:54:28 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:32.469 08:54:28 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:32.469 08:54:28 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:22:32.469 08:54:28 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:22:32.469 08:54:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:32.470 08:54:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:22:32.470 08:54:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.470 08:54:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90315 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:32.470 08:54:29 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90315 00:22:32.470 08:54:29 spdkcli_raid -- common/autotest_common.sh@832 -- # '[' -z 90315 ']' 00:22:32.470 08:54:29 spdkcli_raid -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.470 08:54:29 spdkcli_raid -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:32.470 08:54:29 spdkcli_raid -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.470 08:54:29 spdkcli_raid -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:32.470 08:54:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.470 [2024-11-27 08:54:29.132037] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:22:32.470 [2024-11-27 08:54:29.132510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90315 ] 00:22:32.728 [2024-11-27 08:54:29.314647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:32.987 [2024-11-27 08:54:29.487736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.987 [2024-11-27 08:54:29.487743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.923 08:54:30 spdkcli_raid -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:33.923 08:54:30 spdkcli_raid -- common/autotest_common.sh@865 -- # return 0 00:22:33.923 08:54:30 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:22:33.924 08:54:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.924 08:54:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:33.924 08:54:30 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:22:33.924 08:54:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.924 08:54:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:33.924 08:54:30 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:33.924 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:33.924 ' 00:22:35.300 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:22:35.300 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:22:35.560 08:54:32 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:22:35.560 08:54:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.560 08:54:32 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:22:35.560 08:54:32 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:22:35.560 08:54:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.560 08:54:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:35.560 08:54:32 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:22:35.560 ' 00:22:36.938 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:22:36.938 08:54:33 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:22:36.938 08:54:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.938 08:54:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:36.938 08:54:33 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:22:36.938 08:54:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.938 08:54:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:36.938 08:54:33 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:22:36.938 08:54:33 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:22:37.505 08:54:33 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:22:37.505 08:54:34 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:22:37.505 08:54:34 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:22:37.505 08:54:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.505 08:54:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.505 08:54:34 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:22:37.505 08:54:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.505 08:54:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:37.505 08:54:34 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:22:37.505 ' 00:22:38.441 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:22:38.700 08:54:35 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:22:38.700 08:54:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.700 08:54:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:38.700 08:54:35 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:22:38.700 08:54:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.700 08:54:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:38.700 08:54:35 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:22:38.700 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:22:38.700 ' 00:22:40.075 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:22:40.075 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:22:40.075 08:54:36 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:22:40.075 08:54:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.075 08:54:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.075 08:54:36 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90315 00:22:40.075 08:54:36 spdkcli_raid -- common/autotest_common.sh@951 -- # '[' -z 90315 ']' 00:22:40.075 08:54:36 spdkcli_raid -- common/autotest_common.sh@955 -- # kill -0 90315 00:22:40.075 08:54:36 spdkcli_raid -- 
common/autotest_common.sh@956 -- # uname 00:22:40.334 08:54:36 spdkcli_raid -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:40.334 08:54:36 spdkcli_raid -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 90315 00:22:40.334 08:54:36 spdkcli_raid -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:22:40.334 08:54:36 spdkcli_raid -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:22:40.334 08:54:36 spdkcli_raid -- common/autotest_common.sh@969 -- # echo 'killing process with pid 90315' 00:22:40.334 killing process with pid 90315 00:22:40.334 08:54:36 spdkcli_raid -- common/autotest_common.sh@970 -- # kill 90315 00:22:40.334 08:54:36 spdkcli_raid -- common/autotest_common.sh@975 -- # wait 90315 00:22:42.864 08:54:39 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:22:42.864 08:54:39 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90315 ']' 00:22:42.864 08:54:39 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90315 00:22:42.864 Process with pid 90315 is not found 00:22:42.864 08:54:39 spdkcli_raid -- common/autotest_common.sh@951 -- # '[' -z 90315 ']' 00:22:42.864 08:54:39 spdkcli_raid -- common/autotest_common.sh@955 -- # kill -0 90315 00:22:42.864 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 955: kill: (90315) - No such process 00:22:42.864 08:54:39 spdkcli_raid -- common/autotest_common.sh@978 -- # echo 'Process with pid 90315 is not found' 00:22:42.864 08:54:39 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:22:42.864 08:54:39 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:42.864 08:54:39 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:42.864 08:54:39 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:42.864 ************************************ 00:22:42.864 END TEST spdkcli_raid 
00:22:42.864 ************************************ 00:22:42.864 00:22:42.864 real 0m10.441s 00:22:42.864 user 0m21.453s 00:22:42.864 sys 0m1.158s 00:22:42.864 08:54:39 spdkcli_raid -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:42.864 08:54:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:42.864 08:54:39 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:42.864 08:54:39 -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:22:42.864 08:54:39 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:42.864 08:54:39 -- common/autotest_common.sh@10 -- # set +x 00:22:42.864 ************************************ 00:22:42.864 START TEST blockdev_raid5f 00:22:42.864 ************************************ 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:42.864 * Looking for test storage... 00:22:42.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1690 -- # lcov --version 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.864 08:54:39 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:42.864 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.864 --rc genhtml_branch_coverage=1 00:22:42.864 --rc genhtml_function_coverage=1 00:22:42.864 --rc genhtml_legend=1 00:22:42.864 --rc geninfo_all_blocks=1 00:22:42.864 --rc geninfo_unexecuted_blocks=1 00:22:42.864 00:22:42.864 ' 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:42.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.864 --rc genhtml_branch_coverage=1 00:22:42.864 --rc genhtml_function_coverage=1 00:22:42.864 --rc genhtml_legend=1 00:22:42.864 --rc geninfo_all_blocks=1 00:22:42.864 --rc geninfo_unexecuted_blocks=1 00:22:42.864 00:22:42.864 ' 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:42.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.864 --rc genhtml_branch_coverage=1 00:22:42.864 --rc genhtml_function_coverage=1 00:22:42.864 --rc genhtml_legend=1 00:22:42.864 --rc geninfo_all_blocks=1 00:22:42.864 --rc geninfo_unexecuted_blocks=1 00:22:42.864 00:22:42.864 ' 00:22:42.864 08:54:39 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:42.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.864 --rc genhtml_branch_coverage=1 00:22:42.864 --rc genhtml_function_coverage=1 00:22:42.864 --rc genhtml_legend=1 00:22:42.864 --rc geninfo_all_blocks=1 00:22:42.864 --rc geninfo_unexecuted_blocks=1 00:22:42.864 00:22:42.864 ' 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:42.864 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:22:42.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90601 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' 
SIGINT SIGTERM EXIT 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90601 00:22:42.865 08:54:39 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:42.865 08:54:39 blockdev_raid5f -- common/autotest_common.sh@832 -- # '[' -z 90601 ']' 00:22:42.865 08:54:39 blockdev_raid5f -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.865 08:54:39 blockdev_raid5f -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:42.865 08:54:39 blockdev_raid5f -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.865 08:54:39 blockdev_raid5f -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:42.865 08:54:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:43.123 [2024-11-27 08:54:39.629245] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:22:43.123 [2024-11-27 08:54:39.629439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90601 ] 00:22:43.123 [2024-11-27 08:54:39.809166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.382 [2024-11-27 08:54:39.955484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.317 08:54:40 blockdev_raid5f -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:44.317 08:54:40 blockdev_raid5f -- common/autotest_common.sh@865 -- # return 0 00:22:44.317 08:54:40 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:44.317 08:54:40 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:22:44.317 08:54:40 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:22:44.317 08:54:40 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.317 08:54:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.317 Malloc0 00:22:44.317 Malloc1 00:22:44.317 Malloc2 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.317 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.317 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:22:44.317 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.317 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.317 08:54:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "18cf9b6e-c531-41e7-b819-f0a897f87452"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "18cf9b6e-c531-41e7-b819-f0a897f87452",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "18cf9b6e-c531-41e7-b819-f0a897f87452",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2e3c39d4-d12c-4f70-9c74-e3e128d28f72",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "76366175-ce19-4cf6-a262-5c842a37efad",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "1fe1964f-6e27-4530-b090-077f1f0e14b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:44.575 08:54:41 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90601 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@951 -- # '[' -z 90601 ']' 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@955 -- # kill -0 90601 00:22:44.575 08:54:41 blockdev_raid5f -- common/autotest_common.sh@956 -- # uname 00:22:44.576 08:54:41 blockdev_raid5f -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:44.576 08:54:41 blockdev_raid5f -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 90601 00:22:44.576 killing process with pid 90601 00:22:44.576 08:54:41 blockdev_raid5f -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:22:44.576 08:54:41 blockdev_raid5f -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:22:44.576 08:54:41 blockdev_raid5f -- common/autotest_common.sh@969 -- # echo 'killing process with pid 90601' 00:22:44.576 08:54:41 blockdev_raid5f -- common/autotest_common.sh@970 -- # kill 90601 00:22:44.576 08:54:41 blockdev_raid5f -- common/autotest_common.sh@975 -- # wait 90601 00:22:47.858 08:54:43 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:47.858 08:54:43 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:47.858 08:54:43 
blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:22:47.858 08:54:43 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:47.858 08:54:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:47.858 ************************************ 00:22:47.858 START TEST bdev_hello_world 00:22:47.858 ************************************ 00:22:47.858 08:54:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:47.858 [2024-11-27 08:54:43.992921] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:22:47.858 [2024-11-27 08:54:43.993115] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90667 ] 00:22:47.858 [2024-11-27 08:54:44.181089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.858 [2024-11-27 08:54:44.322761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.424 [2024-11-27 08:54:44.885574] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:48.424 [2024-11-27 08:54:44.885653] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:22:48.424 [2024-11-27 08:54:44.885679] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:48.424 [2024-11-27 08:54:44.886270] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:48.424 [2024-11-27 08:54:44.886457] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:48.424 [2024-11-27 08:54:44.886485] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:48.424 [2024-11-27 08:54:44.886580] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:22:48.424 00:22:48.424 [2024-11-27 08:54:44.886611] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:49.800 00:22:49.800 real 0m2.381s 00:22:49.800 user 0m1.901s 00:22:49.800 sys 0m0.360s 00:22:49.800 08:54:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:49.800 ************************************ 00:22:49.800 END TEST bdev_hello_world 00:22:49.800 ************************************ 00:22:49.800 08:54:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:49.800 08:54:46 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:49.800 08:54:46 blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:22:49.800 08:54:46 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:49.800 08:54:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:49.800 ************************************ 00:22:49.800 START TEST bdev_bounds 00:22:49.800 ************************************ 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # bdev_bounds '' 00:22:49.800 Process bdevio pid: 90710 00:22:49.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90710 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90710' 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90710 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@832 -- # '[' -z 90710 ']' 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:49.800 08:54:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:49.800 [2024-11-27 08:54:46.426380] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:22:49.800 [2024-11-27 08:54:46.426845] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90710 ] 00:22:50.058 [2024-11-27 08:54:46.615701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:50.058 [2024-11-27 08:54:46.781002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.058 [2024-11-27 08:54:46.781114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.058 [2024-11-27 08:54:46.781141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.996 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:50.996 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@865 -- # return 0 00:22:50.997 08:54:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:50.997 I/O targets: 00:22:50.997 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:22:50.997 00:22:50.997 00:22:50.997 CUnit - A unit testing framework for C - Version 2.1-3 00:22:50.997 http://cunit.sourceforge.net/ 00:22:50.997 00:22:50.997 00:22:50.997 Suite: bdevio tests on: raid5f 00:22:50.997 Test: blockdev write read block ...passed 00:22:50.997 Test: blockdev write zeroes read block ...passed 00:22:50.997 Test: blockdev write zeroes read no split ...passed 00:22:50.997 Test: blockdev write zeroes read split ...passed 00:22:51.254 Test: blockdev write zeroes read split partial ...passed 00:22:51.254 Test: blockdev reset ...passed 00:22:51.254 Test: blockdev write read 8 blocks ...passed 00:22:51.254 Test: blockdev write read size > 128k ...passed 00:22:51.254 Test: blockdev write read invalid size ...passed 00:22:51.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:22:51.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:51.254 Test: blockdev write read max offset ...passed 00:22:51.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:51.255 Test: blockdev writev readv 8 blocks ...passed 00:22:51.255 Test: blockdev writev readv 30 x 1block ...passed 00:22:51.255 Test: blockdev writev readv block ...passed 00:22:51.255 Test: blockdev writev readv size > 128k ...passed 00:22:51.255 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:51.255 Test: blockdev comparev and writev ...passed 00:22:51.255 Test: blockdev nvme passthru rw ...passed 00:22:51.255 Test: blockdev nvme passthru vendor specific ...passed 00:22:51.255 Test: blockdev nvme admin passthru ...passed 00:22:51.255 Test: blockdev copy ...passed 00:22:51.255 00:22:51.255 Run Summary: Type Total Ran Passed Failed Inactive 00:22:51.255 suites 1 1 n/a 0 0 00:22:51.255 tests 23 23 23 0 0 00:22:51.255 asserts 130 130 130 0 n/a 00:22:51.255 00:22:51.255 Elapsed time = 0.615 seconds 00:22:51.255 0 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90710 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@951 -- # '[' -z 90710 ']' 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # kill -0 90710 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # uname 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 90710 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@969 -- # echo 'killing process with pid 90710' 00:22:51.255 killing process with pid 90710 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # kill 90710 00:22:51.255 08:54:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@975 -- # wait 90710 00:22:52.634 08:54:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:52.634 00:22:52.634 real 0m2.947s 00:22:52.634 user 0m7.213s 00:22:52.634 sys 0m0.510s 00:22:52.634 08:54:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:52.634 08:54:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:52.634 ************************************ 00:22:52.634 END TEST bdev_bounds 00:22:52.634 ************************************ 00:22:52.634 08:54:49 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:52.634 08:54:49 blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:22:52.634 08:54:49 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:52.634 08:54:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:52.634 ************************************ 00:22:52.634 START TEST bdev_nbd 00:22:52.634 ************************************ 00:22:52.634 08:54:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:52.634 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90771 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90771 /var/tmp/spdk-nbd.sock 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@832 -- # '[' -z 90771 ']' 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:52.635 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:52.635 08:54:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:52.902 [2024-11-27 08:54:49.427653] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:22:52.902 [2024-11-27 08:54:49.427849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.902 [2024-11-27 08:54:49.609707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.159 [2024-11-27 08:54:49.760352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # return 0 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:53.723 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:53.983 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:53.983 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:53.983 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:53.983 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:22:53.983 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local i 00:22:53.983 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:53.983 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:53.983 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # break 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:22:54.243 
08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:54.243 1+0 records in 00:22:54.243 1+0 records out 00:22:54.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346421 s, 11.8 MB/s 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # size=4096 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # return 0 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:54.243 08:54:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:54.500 { 00:22:54.500 "nbd_device": "/dev/nbd0", 00:22:54.500 "bdev_name": "raid5f" 00:22:54.500 } 00:22:54.500 ]' 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:54.500 { 00:22:54.500 "nbd_device": "/dev/nbd0", 00:22:54.500 "bdev_name": "raid5f" 00:22:54.500 } 00:22:54.500 ]' 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock /dev/nbd0 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.500 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.757 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.014 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:55.271 /dev/nbd0 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local i 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # break 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:22:55.271 08:54:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.271 1+0 records in 00:22:55.271 1+0 records out 00:22:55.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029904 s, 13.7 MB/s 00:22:55.271 08:54:52 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # size=4096 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # return 0 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.271 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:55.837 { 00:22:55.837 "nbd_device": "/dev/nbd0", 00:22:55.837 "bdev_name": "raid5f" 00:22:55.837 } 00:22:55.837 ]' 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:55.837 { 00:22:55.837 "nbd_device": "/dev/nbd0", 00:22:55.837 "bdev_name": "raid5f" 00:22:55.837 } 00:22:55.837 ]' 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # count=1 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:55.837 256+0 records in 00:22:55.837 256+0 records out 00:22:55.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524663 s, 200 MB/s 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:55.837 256+0 records in 00:22:55.837 256+0 records out 00:22:55.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0399903 s, 26.2 MB/s 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:55.837 08:54:52 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:55.837 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.110 
08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:56.110 08:54:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:56.368 08:54:53 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:56.368 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:56.626 malloc_lvol_verify 00:22:56.884 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:57.141 aa337bc9-3ac8-41dd-9129-146f99040725 00:22:57.141 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:57.399 cb205b26-a015-4100-b114-4131737ce505 00:22:57.399 08:54:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:57.657 /dev/nbd0 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:57.657 mke2fs 1.47.0 (5-Feb-2023) 00:22:57.657 Discarding device blocks: 0/4096 done 00:22:57.657 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:57.657 00:22:57.657 Allocating group tables: 0/1 done 00:22:57.657 Writing inode tables: 0/1 done 00:22:57.657 Creating journal (1024 blocks): done 00:22:57.657 Writing superblocks and filesystem accounting information: 0/1 
done 00:22:57.657 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:57.657 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90771 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # '[' -z 90771 ']' 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # kill -0 90771 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # uname 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 90771 00:22:57.915 killing process with pid 90771 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # echo 'killing process with pid 90771' 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # kill 90771 00:22:57.915 08:54:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@975 -- # wait 90771 00:22:59.815 08:54:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:59.815 00:22:59.815 real 0m6.745s 00:22:59.815 user 0m9.602s 00:22:59.815 sys 0m1.455s 00:22:59.815 08:54:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:59.815 08:54:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:59.815 ************************************ 00:22:59.815 END TEST bdev_nbd 00:22:59.815 ************************************ 00:22:59.815 08:54:56 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:22:59.815 08:54:56 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:22:59.815 08:54:56 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:22:59.815 08:54:56 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:22:59.815 08:54:56 blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:22:59.815 08:54:56 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:59.815 08:54:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:59.815 ************************************ 00:22:59.815 START TEST bdev_fio 00:22:59.815 
************************************ 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # fio_test_suite '' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:59.815 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local workload=verify 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local bdev_type=AIO 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local env_context= 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local fio_dir=/usr/src/fio 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1292 -- # '[' -z verify ']' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1296 -- # '[' -n '' ']' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1300 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1302 -- # cat 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # '[' verify == verify ']' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # cat 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # '[' AIO == AIO ']' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # /usr/src/fio/fio --version 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # echo serialize_overlap=1 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1102 -- # '[' 11 -le 1 ']' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:59.815 
08:54:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:59.815 ************************************ 00:22:59.815 START TEST bdev_fio_rw_verify 00:22:59.815 ************************************ 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local sanitizers 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # shift 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local asan_lib= 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:22:59.815 08:54:56 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # grep libasan 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # break 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1353 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:59.815 08:54:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:59.815 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:59.815 fio-3.35 00:22:59.815 Starting 1 thread 00:23:12.018 00:23:12.018 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90979: Wed Nov 27 08:55:07 2024 00:23:12.018 read: IOPS=8345, BW=32.6MiB/s (34.2MB/s)(326MiB/10001msec) 00:23:12.018 slat (usec): min=26, max=707, avg=29.43, stdev= 6.11 00:23:12.018 clat (usec): min=15, max=973, avg=190.95, stdev=71.60 00:23:12.018 lat (usec): min=44, max=1009, avg=220.38, stdev=72.62 00:23:12.018 clat percentiles (usec): 00:23:12.018 | 50.000th=[ 198], 99.000th=[ 326], 99.900th=[ 523], 
99.990th=[ 766], 00:23:12.018 | 99.999th=[ 971] 00:23:12.018 write: IOPS=8795, BW=34.4MiB/s (36.0MB/s)(339MiB/9870msec); 0 zone resets 00:23:12.018 slat (usec): min=12, max=345, avg=23.72, stdev= 6.44 00:23:12.018 clat (usec): min=87, max=1973, avg=438.97, stdev=71.69 00:23:12.019 lat (usec): min=109, max=1997, avg=462.69, stdev=74.16 00:23:12.019 clat percentiles (usec): 00:23:12.019 | 50.000th=[ 441], 99.000th=[ 725], 99.900th=[ 922], 99.990th=[ 1287], 00:23:12.019 | 99.999th=[ 1975] 00:23:12.019 bw ( KiB/s): min=32328, max=37920, per=98.58%, avg=34684.63, stdev=1523.00, samples=19 00:23:12.019 iops : min= 8082, max= 9480, avg=8671.16, stdev=380.75, samples=19 00:23:12.019 lat (usec) : 20=0.01%, 100=5.92%, 250=30.36%, 500=59.99%, 750=3.30% 00:23:12.019 lat (usec) : 1000=0.39% 00:23:12.019 lat (msec) : 2=0.03% 00:23:12.019 cpu : usr=98.01%, sys=0.75%, ctx=33, majf=0, minf=7298 00:23:12.019 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:12.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:12.019 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:12.019 issued rwts: total=83463,86815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:12.019 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:12.019 00:23:12.019 Run status group 0 (all jobs): 00:23:12.019 READ: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=326MiB (342MB), run=10001-10001msec 00:23:12.019 WRITE: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=339MiB (356MB), run=9870-9870msec 00:23:12.597 ----------------------------------------------------- 00:23:12.597 Suppressions used: 00:23:12.597 count bytes template 00:23:12.597 1 7 /usr/src/fio/parse.c 00:23:12.597 754 72384 /usr/src/fio/iolog.c 00:23:12.597 1 8 libtcmalloc_minimal.so 00:23:12.597 1 904 libcrypto.so 00:23:12.597 ----------------------------------------------------- 00:23:12.597 00:23:12.597 00:23:12.597 
real 0m12.967s 00:23:12.597 user 0m13.087s 00:23:12.597 sys 0m1.045s 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:12.597 ************************************ 00:23:12.597 END TEST bdev_fio_rw_verify 00:23:12.597 ************************************ 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local workload=trim 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local bdev_type= 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local env_context= 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local fio_dir=/usr/src/fio 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1292 -- # '[' -z trim ']' 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1296 -- # '[' -n '' ']' 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1300 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1302 -- # cat 00:23:12.597 08:55:09 
blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # '[' trim == verify ']' 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # '[' trim == trim ']' 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # echo rw=trimwrite 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "18cf9b6e-c531-41e7-b819-f0a897f87452"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "18cf9b6e-c531-41e7-b819-f0a897f87452",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "18cf9b6e-c531-41e7-b819-f0a897f87452",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2e3c39d4-d12c-4f70-9c74-e3e128d28f72",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "76366175-ce19-4cf6-a262-5c842a37efad",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "1fe1964f-6e27-4530-b090-077f1f0e14b3",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:12.597 /home/vagrant/spdk_repo/spdk 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:12.597 ************************************ 00:23:12.597 END TEST bdev_fio 00:23:12.597 ************************************ 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:23:12.597 00:23:12.597 real 0m13.176s 00:23:12.597 user 0m13.190s 00:23:12.597 sys 0m1.135s 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:12.597 08:55:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:12.597 08:55:09 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:12.597 08:55:09 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:12.597 08:55:09 blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 16 -le 1 ']' 00:23:12.597 08:55:09 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:23:12.597 08:55:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:12.597 ************************************ 00:23:12.597 START TEST bdev_verify 00:23:12.597 ************************************ 00:23:12.597 08:55:09 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:12.855 [2024-11-27 08:55:09.422449] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:23:12.855 [2024-11-27 08:55:09.422629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91143 ] 00:23:12.855 [2024-11-27 08:55:09.598081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:13.115 [2024-11-27 08:55:09.746103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.115 [2024-11-27 08:55:09.746114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.684 Running I/O for 5 seconds... 00:23:15.994 11062.00 IOPS, 43.21 MiB/s [2024-11-27T08:55:13.785Z] 11685.50 IOPS, 45.65 MiB/s [2024-11-27T08:55:14.721Z] 12275.33 IOPS, 47.95 MiB/s [2024-11-27T08:55:15.656Z] 12596.50 IOPS, 49.21 MiB/s [2024-11-27T08:55:15.656Z] 12771.20 IOPS, 49.89 MiB/s 00:23:18.896 Latency(us) 00:23:18.896 [2024-11-27T08:55:15.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.896 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:18.896 Verification LBA range: start 0x0 length 0x2000 00:23:18.896 raid5f : 5.02 6323.32 24.70 0.00 0.00 30515.21 136.84 40274.85 00:23:18.896 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:18.896 Verification LBA range: start 0x2000 length 0x2000 00:23:18.896 raid5f : 5.01 6426.94 25.11 0.00 0.00 29964.75 288.58 26452.71 00:23:18.896 [2024-11-27T08:55:15.656Z] =================================================================================================================== 00:23:18.896 [2024-11-27T08:55:15.656Z] Total : 12750.26 
49.81 0.00 0.00 30238.00 136.84 40274.85 00:23:20.270 00:23:20.270 real 0m7.390s 00:23:20.270 user 0m13.542s 00:23:20.270 sys 0m0.340s 00:23:20.270 08:55:16 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:20.270 08:55:16 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:20.270 ************************************ 00:23:20.270 END TEST bdev_verify 00:23:20.270 ************************************ 00:23:20.270 08:55:16 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:20.270 08:55:16 blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 16 -le 1 ']' 00:23:20.270 08:55:16 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:23:20.270 08:55:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:20.270 ************************************ 00:23:20.270 START TEST bdev_verify_big_io 00:23:20.270 ************************************ 00:23:20.270 08:55:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:20.270 [2024-11-27 08:55:16.883189] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 
00:23:20.270 [2024-11-27 08:55:16.883412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91237 ] 00:23:20.528 [2024-11-27 08:55:17.073011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:20.786 [2024-11-27 08:55:17.304652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.786 [2024-11-27 08:55:17.304658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.352 Running I/O for 5 seconds... 00:23:23.246 504.00 IOPS, 31.50 MiB/s [2024-11-27T08:55:21.382Z] 507.00 IOPS, 31.69 MiB/s [2024-11-27T08:55:22.316Z] 507.33 IOPS, 31.71 MiB/s [2024-11-27T08:55:23.307Z] 538.50 IOPS, 33.66 MiB/s [2024-11-27T08:55:23.307Z] 558.40 IOPS, 34.90 MiB/s 00:23:26.547 Latency(us) 00:23:26.547 [2024-11-27T08:55:23.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.547 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:26.547 Verification LBA range: start 0x0 length 0x200 00:23:26.547 raid5f : 5.33 286.09 17.88 0.00 0.00 11310309.06 491.52 468999.45 00:23:26.547 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:26.547 Verification LBA range: start 0x200 length 0x200 00:23:26.547 raid5f : 5.34 285.38 17.84 0.00 0.00 11258007.59 226.21 465186.44 00:23:26.547 [2024-11-27T08:55:23.307Z] =================================================================================================================== 00:23:26.547 [2024-11-27T08:55:23.307Z] Total : 571.47 35.72 0.00 0.00 11284158.33 226.21 468999.45 00:23:27.921 00:23:27.921 real 0m7.884s 00:23:27.921 user 0m14.378s 00:23:27.921 sys 0m0.347s 00:23:27.921 08:55:24 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:27.921 08:55:24 
blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:27.921 ************************************ 00:23:27.921 END TEST bdev_verify_big_io 00:23:27.921 ************************************ 00:23:28.180 08:55:24 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:28.181 08:55:24 blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 13 -le 1 ']' 00:23:28.181 08:55:24 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:23:28.181 08:55:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:28.181 ************************************ 00:23:28.181 START TEST bdev_write_zeroes 00:23:28.181 ************************************ 00:23:28.181 08:55:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:28.181 [2024-11-27 08:55:24.800584] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:23:28.181 [2024-11-27 08:55:24.800740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91338 ] 00:23:28.439 [2024-11-27 08:55:24.977790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.439 [2024-11-27 08:55:25.126015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.006 Running I/O for 1 seconds... 
00:23:30.382 19551.00 IOPS, 76.37 MiB/s 00:23:30.382 Latency(us) 00:23:30.382 [2024-11-27T08:55:27.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.382 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:30.382 raid5f : 1.01 19518.93 76.25 0.00 0.00 6531.48 2040.55 8996.31 00:23:30.382 [2024-11-27T08:55:27.142Z] =================================================================================================================== 00:23:30.382 [2024-11-27T08:55:27.142Z] Total : 19518.93 76.25 0.00 0.00 6531.48 2040.55 8996.31 00:23:31.757 ************************************ 00:23:31.757 END TEST bdev_write_zeroes 00:23:31.757 ************************************ 00:23:31.757 00:23:31.757 real 0m3.391s 00:23:31.757 user 0m2.915s 00:23:31.757 sys 0m0.343s 00:23:31.757 08:55:28 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:31.757 08:55:28 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:31.757 08:55:28 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:31.757 08:55:28 blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 13 -le 1 ']' 00:23:31.757 08:55:28 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:23:31.757 08:55:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:31.757 ************************************ 00:23:31.757 START TEST bdev_json_nonenclosed 00:23:31.757 ************************************ 00:23:31.757 08:55:28 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:31.757 [2024-11-27 
08:55:28.263107] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:23:31.757 [2024-11-27 08:55:28.263301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91391 ] 00:23:31.757 [2024-11-27 08:55:28.450440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.015 [2024-11-27 08:55:28.596418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.015 [2024-11-27 08:55:28.596552] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:32.015 [2024-11-27 08:55:28.596597] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:32.015 [2024-11-27 08:55:28.596614] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:32.273 00:23:32.273 real 0m0.738s 00:23:32.273 user 0m0.470s 00:23:32.273 sys 0m0.162s 00:23:32.273 08:55:28 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:32.273 08:55:28 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:32.273 ************************************ 00:23:32.273 END TEST bdev_json_nonenclosed 00:23:32.273 ************************************ 00:23:32.274 08:55:28 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:32.274 08:55:28 blockdev_raid5f -- common/autotest_common.sh@1102 -- # '[' 13 -le 1 ']' 00:23:32.274 08:55:28 blockdev_raid5f -- common/autotest_common.sh@1108 -- # xtrace_disable 00:23:32.274 08:55:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:32.274 
************************************ 00:23:32.274 START TEST bdev_json_nonarray 00:23:32.274 ************************************ 00:23:32.274 08:55:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:32.531 [2024-11-27 08:55:29.046191] Starting SPDK v25.01-pre git sha1 df5e5465c / DPDK 24.03.0 initialization... 00:23:32.531 [2024-11-27 08:55:29.046407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91422 ] 00:23:32.531 [2024-11-27 08:55:29.240962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.789 [2024-11-27 08:55:29.387710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.789 [2024-11-27 08:55:29.387871] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:23:32.789 [2024-11-27 08:55:29.387904] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:32.789 [2024-11-27 08:55:29.387933] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:33.048 ************************************ 00:23:33.048 END TEST bdev_json_nonarray 00:23:33.048 ************************************ 00:23:33.048 00:23:33.048 real 0m0.741s 00:23:33.048 user 0m0.471s 00:23:33.048 sys 0m0.163s 00:23:33.048 08:55:29 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:33.048 08:55:29 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:23:33.048 08:55:29 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:23:33.048 ************************************ 00:23:33.048 END TEST blockdev_raid5f 00:23:33.048 ************************************ 00:23:33.048 00:23:33.048 real 0m50.436s 00:23:33.048 user 1m8.298s 00:23:33.048 sys 0m5.888s 00:23:33.048 08:55:29 blockdev_raid5f -- 
common/autotest_common.sh@1127 -- # xtrace_disable 00:23:33.048 08:55:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:33.048 08:55:29 -- spdk/autotest.sh@194 -- # uname -s 00:23:33.048 08:55:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:23:33.048 08:55:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:33.048 08:55:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:33.048 08:55:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:33.048 08:55:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.048 08:55:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.048 08:55:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:33.048 08:55:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:33.048 08:55:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:33.048 08:55:29 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:33.048 08:55:29 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:33.306 08:55:29 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:23:33.306 08:55:29 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:33.306 08:55:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.306 08:55:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.306 08:55:29 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:33.306 08:55:29 -- common/autotest_common.sh@1393 -- # local autotest_es=0 00:23:33.306 08:55:29 -- common/autotest_common.sh@1394 -- # xtrace_disable 00:23:33.306 08:55:29 -- common/autotest_common.sh@10 -- # set +x 00:23:34.685 INFO: APP EXITING 00:23:34.685 INFO: killing all VMs 00:23:34.685 INFO: killing vhost app 00:23:34.685 INFO: EXIT DONE 00:23:34.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:34.944 Waiting for block devices as requested 00:23:34.944 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:35.203 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:35.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:35.769 Cleaning 00:23:35.769 Removing: /var/run/dpdk/spdk0/config 00:23:35.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:35.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:35.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:35.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:35.769 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:35.769 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:35.769 Removing: /dev/shm/spdk_tgt_trace.pid56856 00:23:35.769 Removing: /var/run/dpdk/spdk0 00:23:35.769 Removing: /var/run/dpdk/spdk_pid56620 00:23:35.769 Removing: /var/run/dpdk/spdk_pid56856 00:23:35.770 Removing: /var/run/dpdk/spdk_pid57085 00:23:36.029 Removing: /var/run/dpdk/spdk_pid57189 00:23:36.029 Removing: /var/run/dpdk/spdk_pid57245 00:23:36.029 Removing: /var/run/dpdk/spdk_pid57378 00:23:36.029 Removing: 
/var/run/dpdk/spdk_pid57402 00:23:36.029 Removing: /var/run/dpdk/spdk_pid57612 00:23:36.029 Removing: /var/run/dpdk/spdk_pid57718 00:23:36.029 Removing: /var/run/dpdk/spdk_pid57836 00:23:36.029 Removing: /var/run/dpdk/spdk_pid57958 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58066 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58111 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58152 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58224 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58335 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58812 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58887 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58976 00:23:36.029 Removing: /var/run/dpdk/spdk_pid58992 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59147 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59168 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59326 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59342 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59417 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59435 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59504 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59528 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59723 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59765 00:23:36.029 Removing: /var/run/dpdk/spdk_pid59854 00:23:36.029 Removing: /var/run/dpdk/spdk_pid61247 00:23:36.029 Removing: /var/run/dpdk/spdk_pid61464 00:23:36.029 Removing: /var/run/dpdk/spdk_pid61614 00:23:36.029 Removing: /var/run/dpdk/spdk_pid62268 00:23:36.029 Removing: /var/run/dpdk/spdk_pid62483 00:23:36.029 Removing: /var/run/dpdk/spdk_pid62634 00:23:36.029 Removing: /var/run/dpdk/spdk_pid63294 00:23:36.029 Removing: /var/run/dpdk/spdk_pid63630 00:23:36.029 Removing: /var/run/dpdk/spdk_pid63781 00:23:36.029 Removing: /var/run/dpdk/spdk_pid65199 00:23:36.029 Removing: /var/run/dpdk/spdk_pid65458 00:23:36.029 Removing: /var/run/dpdk/spdk_pid65609 00:23:36.029 Removing: /var/run/dpdk/spdk_pid67022 00:23:36.029 Removing: /var/run/dpdk/spdk_pid67286 00:23:36.029 Removing: 
/var/run/dpdk/spdk_pid67432 00:23:36.029 Removing: /var/run/dpdk/spdk_pid68852 00:23:36.029 Removing: /var/run/dpdk/spdk_pid69308 00:23:36.029 Removing: /var/run/dpdk/spdk_pid69455 00:23:36.029 Removing: /var/run/dpdk/spdk_pid70980 00:23:36.029 Removing: /var/run/dpdk/spdk_pid71250 00:23:36.029 Removing: /var/run/dpdk/spdk_pid71400 00:23:36.029 Removing: /var/run/dpdk/spdk_pid72910 00:23:36.029 Removing: /var/run/dpdk/spdk_pid73181 00:23:36.029 Removing: /var/run/dpdk/spdk_pid73327 00:23:36.029 Removing: /var/run/dpdk/spdk_pid74836 00:23:36.029 Removing: /var/run/dpdk/spdk_pid75333 00:23:36.029 Removing: /var/run/dpdk/spdk_pid75480 00:23:36.029 Removing: /var/run/dpdk/spdk_pid75624 00:23:36.029 Removing: /var/run/dpdk/spdk_pid76075 00:23:36.029 Removing: /var/run/dpdk/spdk_pid76843 00:23:36.029 Removing: /var/run/dpdk/spdk_pid77230 00:23:36.029 Removing: /var/run/dpdk/spdk_pid77931 00:23:36.029 Removing: /var/run/dpdk/spdk_pid78411 00:23:36.029 Removing: /var/run/dpdk/spdk_pid79209 00:23:36.029 Removing: /var/run/dpdk/spdk_pid79624 00:23:36.029 Removing: /var/run/dpdk/spdk_pid81632 00:23:36.029 Removing: /var/run/dpdk/spdk_pid82087 00:23:36.029 Removing: /var/run/dpdk/spdk_pid82529 00:23:36.029 Removing: /var/run/dpdk/spdk_pid84653 00:23:36.029 Removing: /var/run/dpdk/spdk_pid85144 00:23:36.029 Removing: /var/run/dpdk/spdk_pid85653 00:23:36.029 Removing: /var/run/dpdk/spdk_pid86731 00:23:36.029 Removing: /var/run/dpdk/spdk_pid87065 00:23:36.029 Removing: /var/run/dpdk/spdk_pid88024 00:23:36.029 Removing: /var/run/dpdk/spdk_pid88349 00:23:36.029 Removing: /var/run/dpdk/spdk_pid89304 00:23:36.029 Removing: /var/run/dpdk/spdk_pid89638 00:23:36.029 Removing: /var/run/dpdk/spdk_pid90315 00:23:36.029 Removing: /var/run/dpdk/spdk_pid90601 00:23:36.029 Removing: /var/run/dpdk/spdk_pid90667 00:23:36.029 Removing: /var/run/dpdk/spdk_pid90710 00:23:36.029 Removing: /var/run/dpdk/spdk_pid90965 00:23:36.029 Removing: /var/run/dpdk/spdk_pid91143 00:23:36.029 Removing: 
/var/run/dpdk/spdk_pid91237 00:23:36.029 Removing: /var/run/dpdk/spdk_pid91338 00:23:36.029 Removing: /var/run/dpdk/spdk_pid91391 00:23:36.029 Removing: /var/run/dpdk/spdk_pid91422 00:23:36.029 Clean 00:23:36.288 08:55:32 -- common/autotest_common.sh@1450 -- # return 0 00:23:36.288 08:55:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:36.288 08:55:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.288 08:55:32 -- common/autotest_common.sh@10 -- # set +x 00:23:36.288 08:55:32 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:36.288 08:55:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.288 08:55:32 -- common/autotest_common.sh@10 -- # set +x 00:23:36.288 08:55:32 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:36.288 08:55:32 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:36.288 08:55:32 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:36.288 08:55:32 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:36.288 08:55:32 -- spdk/autotest.sh@398 -- # hostname 00:23:36.288 08:55:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:36.547 geninfo: WARNING: invalid characters removed from testname! 
00:24:03.121 08:55:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:05.651 08:56:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:08.938 08:56:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:11.472 08:56:07 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:14.003 08:56:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:16.538 08:56:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:19.821 08:56:16 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:19.821 08:56:16 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:19.821 08:56:16 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:19.821 08:56:16 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:19.821 08:56:16 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:19.821 08:56:16 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:19.821 + [[ -n 5204 ]] 00:24:19.821 + sudo kill 5204 00:24:19.830 [Pipeline] } 00:24:19.849 [Pipeline] // timeout 00:24:19.857 [Pipeline] } 00:24:19.874 [Pipeline] // stage 00:24:19.881 [Pipeline] } 00:24:19.897 [Pipeline] // catchError 00:24:19.908 [Pipeline] stage 00:24:19.911 [Pipeline] { (Stop VM) 00:24:19.926 [Pipeline] sh 00:24:20.204 + vagrant halt 00:24:24.390 ==> default: Halting domain... 00:24:29.692 [Pipeline] sh 00:24:29.974 + vagrant destroy -f 00:24:34.165 ==> default: Removing domain... 
00:24:34.176 [Pipeline] sh 00:24:34.450 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:24:34.459 [Pipeline] } 00:24:34.474 [Pipeline] // stage 00:24:34.480 [Pipeline] } 00:24:34.494 [Pipeline] // dir 00:24:34.500 [Pipeline] } 00:24:34.518 [Pipeline] // wrap 00:24:34.524 [Pipeline] } 00:24:34.537 [Pipeline] // catchError 00:24:34.553 [Pipeline] stage 00:24:34.555 [Pipeline] { (Epilogue) 00:24:34.568 [Pipeline] sh 00:24:34.848 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:41.422 [Pipeline] catchError 00:24:41.424 [Pipeline] { 00:24:41.438 [Pipeline] sh 00:24:41.719 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:41.719 Artifacts sizes are good 00:24:41.728 [Pipeline] } 00:24:41.743 [Pipeline] // catchError 00:24:41.755 [Pipeline] archiveArtifacts 00:24:41.762 Archiving artifacts 00:24:41.935 [Pipeline] cleanWs 00:24:41.951 [WS-CLEANUP] Deleting project workspace... 00:24:41.951 [WS-CLEANUP] Deferred wipeout is used... 00:24:41.959 [WS-CLEANUP] done 00:24:41.961 [Pipeline] } 00:24:41.978 [Pipeline] // stage 00:24:41.984 [Pipeline] } 00:24:42.000 [Pipeline] // node 00:24:42.005 [Pipeline] End of Pipeline 00:24:42.039 Finished: SUCCESS